00:00:00.001 Started by upstream project "autotest-per-patch" build number 124200 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:11.913 The recommended git tool is: git 00:00:11.913 using credential 00000000-0000-0000-0000-000000000002 00:00:11.916 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:11.927 Fetching changes from the remote Git repository 00:00:11.930 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:11.941 Using shallow fetch with depth 1 00:00:11.941 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:11.941 > git --version # timeout=10 00:00:11.952 > git --version # 'git version 2.39.2' 00:00:11.952 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:11.965 Setting http proxy: proxy-dmz.intel.com:911 00:00:11.965 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:20.491 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:20.505 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:20.518 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:20.518 > git config core.sparsecheckout # timeout=10 00:00:20.532 > git read-tree -mu HEAD # timeout=10 00:00:20.549 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:20.570 Commit message: "pool: fixes for VisualBuild class" 00:00:20.570 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:20.651 [Pipeline] Start of Pipeline 00:00:20.666 [Pipeline] library 00:00:20.668 Loading library shm_lib@master 00:00:20.668 Library shm_lib@master is cached. Copying from home. 00:00:20.688 [Pipeline] node 00:00:35.690 Still waiting to schedule task 00:00:35.690 Waiting for next available executor on ‘DiskNvme&&NetCVL’ 00:15:35.260 Running on CYP10 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:35.262 [Pipeline] { 00:15:35.277 [Pipeline] catchError 00:15:35.278 [Pipeline] { 00:15:35.293 [Pipeline] wrap 00:15:35.307 [Pipeline] { 00:15:35.316 [Pipeline] stage 00:15:35.318 [Pipeline] { (Prologue) 00:15:35.500 [Pipeline] sh 00:15:35.787 + logger -p user.info -t JENKINS-CI 00:15:35.807 [Pipeline] echo 00:15:35.809 Node: CYP10 00:15:35.816 [Pipeline] sh 00:15:36.119 [Pipeline] setCustomBuildProperty 00:15:36.134 [Pipeline] echo 00:15:36.136 Cleanup processes 00:15:36.143 [Pipeline] sh 00:15:36.432 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:36.432 1990524 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:36.447 [Pipeline] sh 00:15:36.735 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:36.735 ++ grep -v 'sudo pgrep' 00:15:36.735 ++ awk '{print $1}' 00:15:36.735 + sudo kill -9 00:15:36.735 + true 00:15:36.750 [Pipeline] cleanWs 00:15:36.760 [WS-CLEANUP] Deleting project workspace... 00:15:36.760 [WS-CLEANUP] Deferred wipeout is used... 
00:15:36.766 [WS-CLEANUP] done 00:15:36.770 [Pipeline] setCustomBuildProperty 00:15:36.786 [Pipeline] sh 00:15:37.069 + sudo git config --global --replace-all safe.directory '*' 00:15:37.138 [Pipeline] nodesByLabel 00:15:37.139 Found a total of 2 nodes with the 'sorcerer' label 00:15:37.147 [Pipeline] httpRequest 00:15:37.152 HttpMethod: GET 00:15:37.152 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:15:37.154 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:15:37.158 Response Code: HTTP/1.1 200 OK 00:15:37.159 Success: Status code 200 is in the accepted range: 200,404 00:15:37.159 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:15:37.301 [Pipeline] sh 00:15:37.584 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:15:37.601 [Pipeline] httpRequest 00:15:37.606 HttpMethod: GET 00:15:37.607 URL: http://10.211.164.101/packages/spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz 00:15:37.607 Sending request to url: http://10.211.164.101/packages/spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz 00:15:37.610 Response Code: HTTP/1.1 200 OK 00:15:37.610 Success: Status code 200 is in the accepted range: 200,404 00:15:37.610 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz 00:15:39.775 [Pipeline] sh 00:15:40.062 + tar --no-same-owner -xf spdk_ee2eae53a9bd1d3096e31af60895b50305a10a5f.tar.gz 00:15:43.388 [Pipeline] sh 00:15:43.672 + git -C spdk log --oneline -n5 00:15:43.672 ee2eae53a dif: Match enum spdk_dif_pi_format with NVMe spec 00:15:43.672 a3f6419f1 app/nvme_identify: Add NVM Identify Namespace Data for ELBA Format 00:15:43.672 3b7525570 nvme: Get PI format for Extended LBA format 00:15:43.672 1e8a0c991 nvme: Get NVM Identify Namespace Data for Extended LBA Format 00:15:43.672 493b11851 nvme: Use Host Behavior Support Feature to enable LBA Format Extension 00:15:43.685 [Pipeline] } 00:15:43.704 [Pipeline] // stage 00:15:43.714 [Pipeline] stage 00:15:43.717 [Pipeline] { (Prepare) 00:15:43.735 [Pipeline] writeFile 00:15:43.752 [Pipeline] sh 00:15:44.037 + logger -p user.info -t JENKINS-CI 00:15:44.051 [Pipeline] sh 00:15:44.337 + logger -p user.info -t JENKINS-CI 00:15:44.350 [Pipeline] sh 00:15:44.642 + cat autorun-spdk.conf 00:15:44.642 SPDK_RUN_FUNCTIONAL_TEST=1 00:15:44.642 SPDK_TEST_NVMF=1 00:15:44.642 SPDK_TEST_NVME_CLI=1 00:15:44.642 SPDK_TEST_NVMF_TRANSPORT=tcp 00:15:44.642 SPDK_TEST_NVMF_NICS=e810 00:15:44.642 SPDK_TEST_VFIOUSER=1 00:15:44.642 SPDK_RUN_UBSAN=1 00:15:44.642 NET_TYPE=phy 00:15:44.676 RUN_NIGHTLY=0 00:15:44.681 [Pipeline] readFile 00:15:44.704 [Pipeline] withEnv 00:15:44.706 [Pipeline] { 00:15:44.718 [Pipeline] sh 00:15:45.004 + set -ex 00:15:45.004 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:15:45.004 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:15:45.004 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:15:45.004 ++ SPDK_TEST_NVMF=1 00:15:45.004 ++ SPDK_TEST_NVME_CLI=1 00:15:45.004 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:15:45.004 ++ SPDK_TEST_NVMF_NICS=e810 00:15:45.004 ++ SPDK_TEST_VFIOUSER=1 00:15:45.004 ++ SPDK_RUN_UBSAN=1 00:15:45.004 ++ NET_TYPE=phy 00:15:45.004 ++ RUN_NIGHTLY=0 00:15:45.004 + case $SPDK_TEST_NVMF_NICS in 00:15:45.004 + DRIVERS=ice 00:15:45.004 + [[ tcp == \r\d\m\a ]] 00:15:45.004 + [[ -n ice ]] 00:15:45.004 + sudo rmmod 
mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:15:45.004 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:15:51.589 rmmod: ERROR: Module irdma is not currently loaded 00:15:51.589 rmmod: ERROR: Module i40iw is not currently loaded 00:15:51.589 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:15:51.589 + true 00:15:51.589 + for D in $DRIVERS 00:15:51.589 + sudo modprobe ice 00:15:51.589 + exit 0 00:15:51.599 [Pipeline] } 00:15:51.617 [Pipeline] // withEnv 00:15:51.674 [Pipeline] } 00:15:51.695 [Pipeline] // stage 00:15:51.704 [Pipeline] catchError 00:15:51.706 [Pipeline] { 00:15:51.722 [Pipeline] timeout 00:15:51.722 Timeout set to expire in 50 min 00:15:51.724 [Pipeline] { 00:15:51.740 [Pipeline] stage 00:15:51.743 [Pipeline] { (Tests) 00:15:51.759 [Pipeline] sh 00:15:52.048 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:52.048 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:52.048 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:52.048 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:15:52.048 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:52.048 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:15:52.048 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:15:52.048 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:15:52.048 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:15:52.048 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:15:52.048 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:15:52.048 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:15:52.048 + source /etc/os-release 00:15:52.048 ++ NAME='Fedora Linux' 00:15:52.048 ++ VERSION='38 (Cloud Edition)' 00:15:52.048 ++ ID=fedora 00:15:52.048 ++ VERSION_ID=38 00:15:52.048 ++ VERSION_CODENAME= 00:15:52.048 ++ PLATFORM_ID=platform:f38 00:15:52.048 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:15:52.048 ++ ANSI_COLOR='0;38;2;60;110;180' 00:15:52.048 ++ LOGO=fedora-logo-icon 00:15:52.048 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:15:52.048 ++ HOME_URL=https://fedoraproject.org/ 00:15:52.048 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:15:52.048 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:15:52.048 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:15:52.048 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:15:52.048 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:15:52.048 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:15:52.048 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:15:52.048 ++ SUPPORT_END=2024-05-14 00:15:52.048 ++ VARIANT='Cloud Edition' 00:15:52.048 ++ VARIANT_ID=cloud 00:15:52.048 + uname -a 00:15:52.048 Linux spdk-cyp-10 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:15:52.048 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:15:55.349 Hugepages 00:15:55.349 node hugesize free / total 00:15:55.349 node0 1048576kB 0 / 0 00:15:55.349 node0 2048kB 0 / 0 00:15:55.349 node1 1048576kB 0 / 0 00:15:55.349 node1 2048kB 0 / 0 00:15:55.349 00:15:55.349 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:55.349 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:15:55.349 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:15:55.349 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:15:55.349 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:15:55.349 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:15:55.349 I/OAT 0000:00:01.5 8086 
0b00 0 ioatdma - - 00:15:55.349 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:15:55.349 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:15:55.349 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:15:55.349 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:15:55.349 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:15:55.349 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:15:55.349 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:15:55.349 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:15:55.349 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:15:55.349 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:15:55.349 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:15:55.349 + rm -f /tmp/spdk-ld-path 00:15:55.349 + source autorun-spdk.conf 00:15:55.349 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:15:55.349 ++ SPDK_TEST_NVMF=1 00:15:55.349 ++ SPDK_TEST_NVME_CLI=1 00:15:55.349 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:15:55.349 ++ SPDK_TEST_NVMF_NICS=e810 00:15:55.349 ++ SPDK_TEST_VFIOUSER=1 00:15:55.349 ++ SPDK_RUN_UBSAN=1 00:15:55.349 ++ NET_TYPE=phy 00:15:55.349 ++ RUN_NIGHTLY=0 00:15:55.349 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:15:55.349 + [[ -n '' ]] 00:15:55.349 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:55.349 + for M in /var/spdk/build-*-manifest.txt 00:15:55.349 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:15:55.349 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:15:55.349 + for M in /var/spdk/build-*-manifest.txt 00:15:55.349 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:15:55.349 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:15:55.349 ++ uname 00:15:55.349 + [[ Linux == \L\i\n\u\x ]] 00:15:55.349 + sudo dmesg -T 00:15:55.349 + sudo dmesg --clear 00:15:55.349 + dmesg_pid=1991964 00:15:55.349 + [[ Fedora Linux == FreeBSD ]] 00:15:55.349 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:55.349 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:55.349 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:15:55.349 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:15:55.349 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:15:55.349 + [[ -x /usr/src/fio-static/fio ]] 00:15:55.349 + export FIO_BIN=/usr/src/fio-static/fio 00:15:55.349 + FIO_BIN=/usr/src/fio-static/fio 00:15:55.349 + sudo dmesg -Tw 00:15:55.349 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:15:55.349 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:15:55.349 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:15:55.349 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:55.349 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:55.349 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:15:55.349 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:55.349 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:55.349 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:15:55.349 Test configuration: 00:15:55.349 SPDK_RUN_FUNCTIONAL_TEST=1 00:15:55.349 SPDK_TEST_NVMF=1 00:15:55.349 SPDK_TEST_NVME_CLI=1 00:15:55.349 SPDK_TEST_NVMF_TRANSPORT=tcp 00:15:55.349 SPDK_TEST_NVMF_NICS=e810 00:15:55.349 SPDK_TEST_VFIOUSER=1 00:15:55.349 SPDK_RUN_UBSAN=1 00:15:55.349 NET_TYPE=phy 00:15:55.349 RUN_NIGHTLY=0 11:24:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.349 11:24:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:15:55.349 11:24:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.349 11:24:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.349 11:24:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.349 11:24:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.349 11:24:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.349 11:24:24 -- paths/export.sh@5 -- $ export PATH 00:15:55.349 11:24:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.349 11:24:24 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:15:55.349 11:24:24 -- common/autobuild_common.sh@437 -- $ date +%s 00:15:55.349 11:24:24 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718011464.XXXXXX 00:15:55.349 11:24:24 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718011464.sXAgaw 00:15:55.349 11:24:24 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:15:55.349 11:24:24 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:15:55.349 11:24:24 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:15:55.349 11:24:24 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:15:55.349 11:24:24 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:15:55.349 11:24:24 -- common/autobuild_common.sh@453 -- $ get_config_params 00:15:55.349 11:24:24 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:15:55.349 11:24:24 -- common/autotest_common.sh@10 -- $ set +x 00:15:55.349 11:24:24 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:15:55.350 11:24:24 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:15:55.350 11:24:24 -- pm/common@17 -- $ local monitor 00:15:55.350 11:24:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:55.350 11:24:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:55.350 11:24:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:55.350 11:24:24 -- pm/common@21 -- $ date +%s 00:15:55.350 11:24:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:55.350 11:24:24 -- pm/common@25 -- $ sleep 1 00:15:55.350 11:24:24 -- pm/common@21 -- $ date +%s 00:15:55.350 11:24:24 -- pm/common@21 -- $ date +%s 00:15:55.350 11:24:24 -- pm/common@21 -- $ date +%s 00:15:55.350 11:24:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011464 00:15:55.350 11:24:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011464 00:15:55.350 11:24:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011464 00:15:55.350 11:24:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011464 00:15:55.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011464_collect-vmstat.pm.log 00:15:55.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011464_collect-cpu-load.pm.log 00:15:55.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011464_collect-cpu-temp.pm.log 00:15:55.350 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011464_collect-bmc-pm.bmc.pm.log 00:15:56.293 11:24:25 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:15:56.293 11:24:25 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:15:56.293 11:24:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:15:56.293 11:24:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:56.293 11:24:25 -- spdk/autobuild.sh@16 -- $ date -u 00:15:56.293 Mon Jun 10 09:24:25 AM UTC 2024 00:15:56.293 11:24:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:15:56.293 v24.09-pre-60-gee2eae53a 00:15:56.293 11:24:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:15:56.293 11:24:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:15:56.293 11:24:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:15:56.293 11:24:25 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:15:56.293 11:24:25 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:15:56.293 11:24:25 -- common/autotest_common.sh@10 -- $ set +x 00:15:56.293 ************************************ 00:15:56.293 START TEST ubsan 00:15:56.293 ************************************ 00:15:56.293 11:24:25 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:15:56.293 using ubsan 00:15:56.293 00:15:56.293 real 0m0.001s 00:15:56.293 user 0m0.001s 00:15:56.293 sys 0m0.000s 00:15:56.293 11:24:25 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:15:56.293 11:24:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:15:56.293 ************************************ 00:15:56.293 END TEST ubsan 00:15:56.293 ************************************ 00:15:56.293 11:24:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:15:56.293 11:24:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:15:56.293 11:24:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:15:56.293 11:24:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:15:56.293 11:24:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:15:56.293 11:24:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:15:56.293 11:24:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:15:56.293 11:24:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:15:56.293 11:24:25 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:15:56.554 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:56.554 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:56.815 Using 'verbs' RDMA provider 00:16:12.666 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:16:24.897 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:16:24.897 Creating mk/config.mk...done. 00:16:24.897 Creating mk/cc.flags.mk...done. 00:16:24.897 Type 'make' to build. 00:16:24.897 11:24:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:16:24.897 11:24:53 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:16:24.897 11:24:53 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:16:24.897 11:24:53 -- common/autotest_common.sh@10 -- $ set +x 00:16:24.897 ************************************ 00:16:24.897 START TEST make 00:16:24.897 ************************************ 00:16:24.897 11:24:53 make -- common/autotest_common.sh@1124 -- $ make -j144 00:16:24.897 make[1]: Nothing to be done for 'all'. 
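For reference, the configure and make step that autobuild.sh drives above can be reproduced by hand. A minimal sketch, assuming a local SPDK checkout at ./spdk and reusing the flag set printed in this log (the --with-fio path and the -j144 job count are specific to this CI host):

  # Reproduce the SPDK configure/build step shown above by hand.
  # Flags are copied verbatim from the autobuild output; adjust
  # --with-fio and the job count for the local machine.
  cd ./spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"    # the CI run above uses make -j144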
00:16:25.846 The Meson build system 00:16:25.846 Version: 1.3.1 00:16:25.846 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:16:25.846 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:16:25.846 Build type: native build 00:16:25.846 Project name: libvfio-user 00:16:25.846 Project version: 0.0.1 00:16:25.846 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:16:25.846 C linker for the host machine: cc ld.bfd 2.39-16 00:16:25.846 Host machine cpu family: x86_64 00:16:25.846 Host machine cpu: x86_64 00:16:25.846 Run-time dependency threads found: YES 00:16:25.846 Library dl found: YES 00:16:25.846 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:16:25.846 Run-time dependency json-c found: YES 0.17 00:16:25.846 Run-time dependency cmocka found: YES 1.1.7 00:16:25.846 Program pytest-3 found: NO 00:16:25.846 Program flake8 found: NO 00:16:25.846 Program misspell-fixer found: NO 00:16:25.846 Program restructuredtext-lint found: NO 00:16:25.846 Program valgrind found: YES (/usr/bin/valgrind) 00:16:25.846 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:16:25.846 Compiler for C supports arguments -Wmissing-declarations: YES 00:16:25.846 Compiler for C supports arguments -Wwrite-strings: YES 00:16:25.846 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:16:25.846 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:16:25.846 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:16:25.846 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:16:25.846 Build targets in project: 8 00:16:25.846 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:16:25.846 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:16:25.846 00:16:25.846 libvfio-user 0.0.1 00:16:25.846 00:16:25.846 User defined options 00:16:25.846 buildtype : debug 00:16:25.846 default_library: shared 00:16:25.846 libdir : /usr/local/lib 00:16:25.846 00:16:25.846 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:16:26.416 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:16:26.416 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:16:26.416 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:16:26.416 [3/37] Compiling C object samples/null.p/null.c.o 00:16:26.416 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:16:26.416 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:16:26.416 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:16:26.416 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:16:26.416 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:16:26.416 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:16:26.416 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:16:26.416 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:16:26.416 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:16:26.416 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:16:26.416 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:16:26.416 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:16:26.416 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:16:26.416 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:16:26.416 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:16:26.416 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:16:26.416 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:16:26.416 [21/37] Compiling C object samples/client.p/client.c.o 00:16:26.416 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:16:26.416 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:16:26.416 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:16:26.416 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:16:26.416 [26/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:16:26.416 [27/37] Compiling C object samples/server.p/server.c.o 00:16:26.676 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:16:26.676 [29/37] Linking target samples/client 00:16:26.676 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:16:26.676 [31/37] Linking target test/unit_tests 00:16:26.676 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:16:26.676 [33/37] Linking target samples/lspci 00:16:26.676 [34/37] Linking target samples/server 00:16:26.676 [35/37] Linking target samples/null 00:16:26.676 [36/37] Linking target samples/gpio-pci-idio-16 00:16:26.676 [37/37] Linking target samples/shadow_ioeventfd_server 00:16:26.676 INFO: autodetecting backend as ninja 00:16:26.676 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
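The libvfio-user options summarized above (buildtype debug, shared default_library, libdir /usr/local/lib) correspond to a plain meson setup followed by a ninja compile. A minimal sketch using the same source and build directories as this workspace; SPDK's build scripts normally drive this step, so the commands are illustrative only:

  # Configure and compile libvfio-user the way the log above does.
  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  meson setup "$BLD" "$SRC" -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C "$BLD"    # builds the 37 targets listed above

The DESTDIR-prefixed meson install that follows in the log then stages the result under spdk/build/libvfio-user.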
00:16:26.937 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:16:27.198 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:16:27.198 ninja: no work to do. 00:16:33.783 The Meson build system 00:16:33.783 Version: 1.3.1 00:16:33.783 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:16:33.783 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:16:33.783 Build type: native build 00:16:33.783 Program cat found: YES (/usr/bin/cat) 00:16:33.783 Project name: DPDK 00:16:33.783 Project version: 24.03.0 00:16:33.783 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:16:33.783 C linker for the host machine: cc ld.bfd 2.39-16 00:16:33.783 Host machine cpu family: x86_64 00:16:33.783 Host machine cpu: x86_64 00:16:33.783 Message: ## Building in Developer Mode ## 00:16:33.783 Program pkg-config found: YES (/usr/bin/pkg-config) 00:16:33.783 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:16:33.783 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:16:33.783 Program python3 found: YES (/usr/bin/python3) 00:16:33.783 Program cat found: YES (/usr/bin/cat) 00:16:33.783 Compiler for C supports arguments -march=native: YES 00:16:33.783 Checking for size of "void *" : 8 00:16:33.783 Checking for size of "void *" : 8 (cached) 00:16:33.783 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:16:33.783 Library m found: YES 00:16:33.783 Library numa found: YES 00:16:33.783 Has header "numaif.h" : YES 00:16:33.783 Library fdt found: NO 00:16:33.783 Library execinfo found: NO 00:16:33.783 Has header "execinfo.h" : YES 00:16:33.783 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:16:33.783 Run-time dependency libarchive found: NO (tried pkgconfig) 00:16:33.783 Run-time dependency libbsd found: NO (tried pkgconfig) 00:16:33.784 Run-time dependency jansson found: NO (tried pkgconfig) 00:16:33.784 Run-time dependency openssl found: YES 3.0.9 00:16:33.784 Run-time dependency libpcap found: YES 1.10.4 00:16:33.784 Has header "pcap.h" with dependency libpcap: YES 00:16:33.784 Compiler for C supports arguments -Wcast-qual: YES 00:16:33.784 Compiler for C supports arguments -Wdeprecated: YES 00:16:33.784 Compiler for C supports arguments -Wformat: YES 00:16:33.784 Compiler for C supports arguments -Wformat-nonliteral: NO 00:16:33.784 Compiler for C supports arguments -Wformat-security: NO 00:16:33.784 Compiler for C supports arguments -Wmissing-declarations: YES 00:16:33.784 Compiler for C supports arguments -Wmissing-prototypes: YES 00:16:33.784 Compiler for C supports arguments -Wnested-externs: YES 00:16:33.784 Compiler for C supports arguments -Wold-style-definition: YES 00:16:33.784 Compiler for C supports arguments -Wpointer-arith: YES 00:16:33.784 Compiler for C supports arguments -Wsign-compare: YES 00:16:33.784 Compiler for C supports arguments -Wstrict-prototypes: YES 00:16:33.784 Compiler for C supports arguments -Wundef: YES 00:16:33.784 Compiler for C supports arguments -Wwrite-strings: YES 00:16:33.784 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:16:33.784 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:16:33.784 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:16:33.784 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:16:33.784 Program objdump found: YES (/usr/bin/objdump) 00:16:33.784 Compiler for C supports arguments -mavx512f: YES 00:16:33.784 Checking if "AVX512 checking" compiles: YES 00:16:33.784 Fetching value of define "__SSE4_2__" : 1 00:16:33.784 Fetching value of define "__AES__" : 1 00:16:33.784 Fetching value of define "__AVX__" : 1 00:16:33.784 Fetching value of define "__AVX2__" : 1 00:16:33.784 Fetching value of define "__AVX512BW__" : 1 00:16:33.784 Fetching value of define "__AVX512CD__" : 1 00:16:33.784 Fetching value of define "__AVX512DQ__" : 1 00:16:33.784 Fetching value of define "__AVX512F__" : 1 00:16:33.784 Fetching value of define "__AVX512VL__" : 1 00:16:33.784 Fetching value of define "__PCLMUL__" : 1 00:16:33.784 Fetching value of define "__RDRND__" : 1 00:16:33.784 Fetching value of define "__RDSEED__" : 1 00:16:33.784 Fetching value of define "__VPCLMULQDQ__" : 1 00:16:33.784 Fetching value of define "__znver1__" : (undefined) 00:16:33.784 Fetching value of define "__znver2__" : (undefined) 00:16:33.784 Fetching value of define "__znver3__" : (undefined) 00:16:33.784 Fetching value of define "__znver4__" : (undefined) 00:16:33.784 Compiler for C supports arguments -Wno-format-truncation: YES 00:16:33.784 Message: lib/log: Defining dependency "log" 00:16:33.784 Message: lib/kvargs: Defining dependency "kvargs" 00:16:33.784 Message: lib/telemetry: Defining dependency "telemetry" 00:16:33.784 Checking for function "getentropy" : NO 00:16:33.784 Message: lib/eal: Defining dependency "eal" 00:16:33.784 Message: lib/ring: Defining dependency "ring" 00:16:33.784 Message: lib/rcu: Defining dependency "rcu" 00:16:33.784 Message: lib/mempool: Defining dependency "mempool" 00:16:33.784 Message: lib/mbuf: Defining dependency "mbuf" 00:16:33.784 Fetching value of define "__PCLMUL__" : 1 (cached) 00:16:33.784 Fetching value of define "__AVX512F__" : 1 (cached) 00:16:33.784 Fetching value of define "__AVX512BW__" : 1 (cached) 00:16:33.784 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:16:33.784 Fetching value of define "__AVX512VL__" : 1 (cached) 00:16:33.784 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:16:33.784 Compiler for C supports arguments -mpclmul: YES 00:16:33.784 Compiler for C supports arguments -maes: YES 00:16:33.784 Compiler for C supports arguments -mavx512f: YES (cached) 00:16:33.784 Compiler for C supports arguments -mavx512bw: YES 00:16:33.784 Compiler for C supports arguments -mavx512dq: YES 00:16:33.784 Compiler for C supports arguments -mavx512vl: YES 00:16:33.784 Compiler for C supports arguments -mvpclmulqdq: YES 00:16:33.784 Compiler for C supports arguments -mavx2: YES 00:16:33.784 Compiler for C supports arguments -mavx: YES 00:16:33.784 Message: lib/net: Defining dependency "net" 00:16:33.784 Message: lib/meter: Defining dependency "meter" 00:16:33.784 Message: lib/ethdev: Defining dependency "ethdev" 00:16:33.784 Message: lib/pci: Defining dependency "pci" 00:16:33.784 Message: lib/cmdline: Defining dependency "cmdline" 00:16:33.784 Message: lib/hash: Defining dependency "hash" 00:16:33.784 Message: lib/timer: Defining dependency "timer" 00:16:33.784 Message: lib/compressdev: Defining dependency "compressdev" 00:16:33.784 Message: lib/cryptodev: Defining dependency "cryptodev" 00:16:33.784 Message: lib/dmadev: Defining dependency "dmadev" 00:16:33.784 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:16:33.784 Message: lib/power: Defining dependency "power" 00:16:33.784 Message: lib/reorder: Defining dependency "reorder" 00:16:33.784 Message: lib/security: Defining dependency "security" 00:16:33.784 Has header "linux/userfaultfd.h" : YES 00:16:33.784 Has header "linux/vduse.h" : YES 00:16:33.784 Message: lib/vhost: Defining dependency "vhost" 00:16:33.784 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:16:33.784 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:16:33.784 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:16:33.784 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:16:33.784 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:16:33.784 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:16:33.784 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:16:33.784 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:16:33.784 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:16:33.784 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:16:33.784 Program doxygen found: YES (/usr/bin/doxygen) 00:16:33.784 Configuring doxy-api-html.conf using configuration 00:16:33.785 Configuring doxy-api-man.conf using configuration 00:16:33.785 Program mandb found: YES (/usr/bin/mandb) 00:16:33.785 Program sphinx-build found: NO 00:16:33.785 Configuring rte_build_config.h using configuration 00:16:33.785 Message: 00:16:33.785 ================= 00:16:33.785 Applications Enabled 00:16:33.785 ================= 00:16:33.785 00:16:33.785 apps: 00:16:33.785 00:16:33.785 00:16:33.785 Message: 00:16:33.785 ================= 00:16:33.785 Libraries Enabled 00:16:33.785 ================= 00:16:33.785 00:16:33.785 libs: 00:16:33.785 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:16:33.785 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:16:33.785 cryptodev, dmadev, power, reorder, security, vhost, 00:16:33.785 00:16:33.785 Message: 00:16:33.785 =============== 00:16:33.785 Drivers Enabled 00:16:33.785 =============== 00:16:33.785 00:16:33.785 common: 00:16:33.785 00:16:33.785 bus: 00:16:33.785 pci, vdev, 00:16:33.785 mempool: 00:16:33.785 ring, 00:16:33.785 dma: 00:16:33.785 00:16:33.785 net: 00:16:33.785 00:16:33.785 crypto: 00:16:33.785 00:16:33.785 compress: 00:16:33.785 00:16:33.785 vdpa: 00:16:33.785 00:16:33.785 00:16:33.785 Message: 00:16:33.785 ================= 00:16:33.785 Content Skipped 00:16:33.785 ================= 00:16:33.785 00:16:33.785 apps: 00:16:33.785 dumpcap: explicitly disabled via build config 00:16:33.785 graph: explicitly disabled via build config 00:16:33.785 pdump: explicitly disabled via build config 00:16:33.785 proc-info: explicitly disabled via build config 00:16:33.785 test-acl: explicitly disabled via build config 00:16:33.785 test-bbdev: explicitly disabled via build config 00:16:33.785 test-cmdline: explicitly disabled via build config 00:16:33.785 test-compress-perf: explicitly disabled via build config 00:16:33.785 test-crypto-perf: explicitly disabled via build config 00:16:33.785 test-dma-perf: explicitly disabled via build config 00:16:33.785 test-eventdev: explicitly disabled via build config 00:16:33.785 test-fib: explicitly disabled via build config 00:16:33.785 test-flow-perf: explicitly disabled via build config 00:16:33.785 test-gpudev: explicitly disabled via build config 00:16:33.785 
test-mldev: explicitly disabled via build config 00:16:33.785 test-pipeline: explicitly disabled via build config 00:16:33.785 test-pmd: explicitly disabled via build config 00:16:33.785 test-regex: explicitly disabled via build config 00:16:33.785 test-sad: explicitly disabled via build config 00:16:33.785 test-security-perf: explicitly disabled via build config 00:16:33.785 00:16:33.785 libs: 00:16:33.785 argparse: explicitly disabled via build config 00:16:33.785 metrics: explicitly disabled via build config 00:16:33.785 acl: explicitly disabled via build config 00:16:33.785 bbdev: explicitly disabled via build config 00:16:33.785 bitratestats: explicitly disabled via build config 00:16:33.785 bpf: explicitly disabled via build config 00:16:33.785 cfgfile: explicitly disabled via build config 00:16:33.785 distributor: explicitly disabled via build config 00:16:33.785 efd: explicitly disabled via build config 00:16:33.785 eventdev: explicitly disabled via build config 00:16:33.785 dispatcher: explicitly disabled via build config 00:16:33.785 gpudev: explicitly disabled via build config 00:16:33.785 gro: explicitly disabled via build config 00:16:33.786 gso: explicitly disabled via build config 00:16:33.786 ip_frag: explicitly disabled via build config 00:16:33.786 jobstats: explicitly disabled via build config 00:16:33.786 latencystats: explicitly disabled via build config 00:16:33.786 lpm: explicitly disabled via build config 00:16:33.786 member: explicitly disabled via build config 00:16:33.786 pcapng: explicitly disabled via build config 00:16:33.786 rawdev: explicitly disabled via build config 00:16:33.786 regexdev: explicitly disabled via build config 00:16:33.786 mldev: explicitly disabled via build config 00:16:33.786 rib: explicitly disabled via build config 00:16:33.786 sched: explicitly disabled via build config 00:16:33.786 stack: explicitly disabled via build config 00:16:33.786 ipsec: explicitly disabled via build config 00:16:33.786 pdcp: explicitly disabled via build config 00:16:33.786 fib: explicitly disabled via build config 00:16:33.786 port: explicitly disabled via build config 00:16:33.786 pdump: explicitly disabled via build config 00:16:33.786 table: explicitly disabled via build config 00:16:33.786 pipeline: explicitly disabled via build config 00:16:33.786 graph: explicitly disabled via build config 00:16:33.786 node: explicitly disabled via build config 00:16:33.786 00:16:33.786 drivers: 00:16:33.786 common/cpt: not in enabled drivers build config 00:16:33.786 common/dpaax: not in enabled drivers build config 00:16:33.786 common/iavf: not in enabled drivers build config 00:16:33.786 common/idpf: not in enabled drivers build config 00:16:33.786 common/ionic: not in enabled drivers build config 00:16:33.786 common/mvep: not in enabled drivers build config 00:16:33.786 common/octeontx: not in enabled drivers build config 00:16:33.786 bus/auxiliary: not in enabled drivers build config 00:16:33.786 bus/cdx: not in enabled drivers build config 00:16:33.786 bus/dpaa: not in enabled drivers build config 00:16:33.786 bus/fslmc: not in enabled drivers build config 00:16:33.786 bus/ifpga: not in enabled drivers build config 00:16:33.786 bus/platform: not in enabled drivers build config 00:16:33.786 bus/uacce: not in enabled drivers build config 00:16:33.786 bus/vmbus: not in enabled drivers build config 00:16:33.786 common/cnxk: not in enabled drivers build config 00:16:33.786 common/mlx5: not in enabled drivers build config 00:16:33.786 common/nfp: not in enabled drivers 
build config 00:16:33.786 common/nitrox: not in enabled drivers build config 00:16:33.786 common/qat: not in enabled drivers build config 00:16:33.786 common/sfc_efx: not in enabled drivers build config 00:16:33.786 mempool/bucket: not in enabled drivers build config 00:16:33.786 mempool/cnxk: not in enabled drivers build config 00:16:33.786 mempool/dpaa: not in enabled drivers build config 00:16:33.786 mempool/dpaa2: not in enabled drivers build config 00:16:33.786 mempool/octeontx: not in enabled drivers build config 00:16:33.786 mempool/stack: not in enabled drivers build config 00:16:33.786 dma/cnxk: not in enabled drivers build config 00:16:33.786 dma/dpaa: not in enabled drivers build config 00:16:33.786 dma/dpaa2: not in enabled drivers build config 00:16:33.786 dma/hisilicon: not in enabled drivers build config 00:16:33.786 dma/idxd: not in enabled drivers build config 00:16:33.786 dma/ioat: not in enabled drivers build config 00:16:33.786 dma/skeleton: not in enabled drivers build config 00:16:33.786 net/af_packet: not in enabled drivers build config 00:16:33.786 net/af_xdp: not in enabled drivers build config 00:16:33.786 net/ark: not in enabled drivers build config 00:16:33.786 net/atlantic: not in enabled drivers build config 00:16:33.786 net/avp: not in enabled drivers build config 00:16:33.786 net/axgbe: not in enabled drivers build config 00:16:33.786 net/bnx2x: not in enabled drivers build config 00:16:33.786 net/bnxt: not in enabled drivers build config 00:16:33.786 net/bonding: not in enabled drivers build config 00:16:33.786 net/cnxk: not in enabled drivers build config 00:16:33.786 net/cpfl: not in enabled drivers build config 00:16:33.786 net/cxgbe: not in enabled drivers build config 00:16:33.786 net/dpaa: not in enabled drivers build config 00:16:33.786 net/dpaa2: not in enabled drivers build config 00:16:33.786 net/e1000: not in enabled drivers build config 00:16:33.786 net/ena: not in enabled drivers build config 00:16:33.786 net/enetc: not in enabled drivers build config 00:16:33.786 net/enetfec: not in enabled drivers build config 00:16:33.786 net/enic: not in enabled drivers build config 00:16:33.786 net/failsafe: not in enabled drivers build config 00:16:33.786 net/fm10k: not in enabled drivers build config 00:16:33.786 net/gve: not in enabled drivers build config 00:16:33.786 net/hinic: not in enabled drivers build config 00:16:33.786 net/hns3: not in enabled drivers build config 00:16:33.786 net/i40e: not in enabled drivers build config 00:16:33.786 net/iavf: not in enabled drivers build config 00:16:33.786 net/ice: not in enabled drivers build config 00:16:33.786 net/idpf: not in enabled drivers build config 00:16:33.786 net/igc: not in enabled drivers build config 00:16:33.786 net/ionic: not in enabled drivers build config 00:16:33.786 net/ipn3ke: not in enabled drivers build config 00:16:33.787 net/ixgbe: not in enabled drivers build config 00:16:33.787 net/mana: not in enabled drivers build config 00:16:33.787 net/memif: not in enabled drivers build config 00:16:33.787 net/mlx4: not in enabled drivers build config 00:16:33.787 net/mlx5: not in enabled drivers build config 00:16:33.787 net/mvneta: not in enabled drivers build config 00:16:33.787 net/mvpp2: not in enabled drivers build config 00:16:33.787 net/netvsc: not in enabled drivers build config 00:16:33.787 net/nfb: not in enabled drivers build config 00:16:33.787 net/nfp: not in enabled drivers build config 00:16:33.787 net/ngbe: not in enabled drivers build config 00:16:33.787 net/null: not in 
enabled drivers build config 00:16:33.787 net/octeontx: not in enabled drivers build config 00:16:33.787 net/octeon_ep: not in enabled drivers build config 00:16:33.787 net/pcap: not in enabled drivers build config 00:16:33.787 net/pfe: not in enabled drivers build config 00:16:33.787 net/qede: not in enabled drivers build config 00:16:33.787 net/ring: not in enabled drivers build config 00:16:33.787 net/sfc: not in enabled drivers build config 00:16:33.787 net/softnic: not in enabled drivers build config 00:16:33.787 net/tap: not in enabled drivers build config 00:16:33.787 net/thunderx: not in enabled drivers build config 00:16:33.787 net/txgbe: not in enabled drivers build config 00:16:33.787 net/vdev_netvsc: not in enabled drivers build config 00:16:33.787 net/vhost: not in enabled drivers build config 00:16:33.787 net/virtio: not in enabled drivers build config 00:16:33.787 net/vmxnet3: not in enabled drivers build config 00:16:33.787 raw/*: missing internal dependency, "rawdev" 00:16:33.787 crypto/armv8: not in enabled drivers build config 00:16:33.787 crypto/bcmfs: not in enabled drivers build config 00:16:33.787 crypto/caam_jr: not in enabled drivers build config 00:16:33.787 crypto/ccp: not in enabled drivers build config 00:16:33.787 crypto/cnxk: not in enabled drivers build config 00:16:33.787 crypto/dpaa_sec: not in enabled drivers build config 00:16:33.787 crypto/dpaa2_sec: not in enabled drivers build config 00:16:33.787 crypto/ipsec_mb: not in enabled drivers build config 00:16:33.787 crypto/mlx5: not in enabled drivers build config 00:16:33.787 crypto/mvsam: not in enabled drivers build config 00:16:33.787 crypto/nitrox: not in enabled drivers build config 00:16:33.787 crypto/null: not in enabled drivers build config 00:16:33.787 crypto/octeontx: not in enabled drivers build config 00:16:33.787 crypto/openssl: not in enabled drivers build config 00:16:33.787 crypto/scheduler: not in enabled drivers build config 00:16:33.787 crypto/uadk: not in enabled drivers build config 00:16:33.787 crypto/virtio: not in enabled drivers build config 00:16:33.787 compress/isal: not in enabled drivers build config 00:16:33.787 compress/mlx5: not in enabled drivers build config 00:16:33.787 compress/nitrox: not in enabled drivers build config 00:16:33.787 compress/octeontx: not in enabled drivers build config 00:16:33.787 compress/zlib: not in enabled drivers build config 00:16:33.787 regex/*: missing internal dependency, "regexdev" 00:16:33.787 ml/*: missing internal dependency, "mldev" 00:16:33.787 vdpa/ifc: not in enabled drivers build config 00:16:33.787 vdpa/mlx5: not in enabled drivers build config 00:16:33.787 vdpa/nfp: not in enabled drivers build config 00:16:33.787 vdpa/sfc: not in enabled drivers build config 00:16:33.787 event/*: missing internal dependency, "eventdev" 00:16:33.787 baseband/*: missing internal dependency, "bbdev" 00:16:33.787 gpu/*: missing internal dependency, "gpudev" 00:16:33.787 00:16:33.787 00:16:33.787 Build targets in project: 84 00:16:33.787 00:16:33.787 DPDK 24.03.0 00:16:33.787 00:16:33.787 User defined options 00:16:33.787 buildtype : debug 00:16:33.787 default_library : shared 00:16:33.787 libdir : lib 00:16:33.787 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:33.787 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:16:33.787 c_link_args : 00:16:33.787 cpu_instruction_set: native 00:16:33.787 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:16:33.787 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:16:33.787 enable_docs : false 00:16:33.787 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:16:33.787 enable_kmods : false 00:16:33.787 tests : false 00:16:33.787 00:16:33.787 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:16:33.787 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:16:33.787 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:16:34.055 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:16:34.055 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:16:34.055 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:16:34.055 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:16:34.055 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:16:34.055 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:16:34.055 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:16:34.055 [9/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:16:34.055 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:16:34.055 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:16:34.055 [12/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:16:34.055 [13/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:16:34.055 [14/267] Linking static target lib/librte_kvargs.a 00:16:34.055 [15/267] Linking static target lib/librte_log.a 00:16:34.055 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:16:34.055 [17/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:16:34.055 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:16:34.055 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:16:34.055 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:16:34.055 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:16:34.055 [22/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:16:34.055 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:16:34.314 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:16:34.314 [25/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:16:34.314 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:16:34.314 [27/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:16:34.314 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:16:34.314 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:16:34.314 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:16:34.314 [31/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:16:34.314 [32/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:16:34.314 [33/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:16:34.314 [34/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:16:34.314 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:16:34.314 [36/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:16:34.314 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:16:34.314 [38/267] Linking static target lib/librte_pci.a 00:16:34.314 [39/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:16:34.314 [40/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:16:34.314 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:16:34.314 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:16:34.314 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:16:34.314 [44/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:16:34.314 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:16:34.314 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:16:34.314 [47/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:16:34.314 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:16:34.314 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:16:34.314 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:16:34.314 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:16:34.314 [52/267] Linking static target lib/librte_telemetry.a 00:16:34.314 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:16:34.314 [54/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:16:34.314 [55/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:16:34.314 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:16:34.314 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:16:34.314 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:16:34.314 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:16:34.314 [60/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:16:34.314 [61/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:16:34.314 [62/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:16:34.314 [63/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:16:34.314 [64/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:16:34.314 [65/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:16:34.314 [66/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:16:34.314 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:16:34.314 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:16:34.314 [69/267] Linking static target lib/librte_meter.a 00:16:34.314 [70/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:16:34.314 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:16:34.314 [72/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:16:34.314 [73/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:16:34.314 [74/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:16:34.314 [75/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:16:34.314 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:16:34.314 [77/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:16:34.314 [78/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:16:34.314 [79/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:16:34.314 [80/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:16:34.314 [81/267] Linking static target lib/librte_timer.a 00:16:34.314 [82/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:16:34.314 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:16:34.314 [84/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:16:34.314 [85/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:16:34.314 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:16:34.314 [87/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:16:34.574 [88/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:16:34.574 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:16:34.574 [90/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.574 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:16:34.575 [92/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:16:34.575 [93/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:16:34.575 [94/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:16:34.575 [95/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.575 [96/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:16:34.575 [97/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:16:34.575 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:16:34.575 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:16:34.575 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:16:34.575 [101/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:16:34.575 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:16:34.575 [103/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:16:34.575 [104/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:16:34.575 [105/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:16:34.575 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:16:34.575 [107/267] Linking static target lib/librte_mempool.a 00:16:34.575 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:16:34.575 [109/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:16:34.575 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:16:34.575 [111/267] Compiling C object 
lib/librte_net.a.p/net_rte_net_crc.c.o 00:16:34.575 [112/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:16:34.575 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:16:34.575 [114/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:16:34.575 [115/267] Linking static target lib/librte_rcu.a 00:16:34.575 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:16:34.575 [117/267] Linking static target lib/librte_security.a 00:16:34.575 [118/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:16:34.575 [119/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:16:34.575 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:16:34.575 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:16:34.575 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:16:34.575 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:16:34.575 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:16:34.575 [125/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:16:34.575 [126/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:16:34.575 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:16:34.575 [128/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:16:34.575 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:16:34.575 [130/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:16:34.575 [131/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.575 [132/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:16:34.575 [133/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:16:34.575 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:16:34.575 [135/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:16:34.575 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:16:34.575 [137/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:16:34.575 [138/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:16:34.575 [139/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:16:34.575 [140/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:16:34.575 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:16:34.575 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:16:34.575 [143/267] Linking static target lib/librte_mbuf.a 00:16:34.575 [144/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:16:34.575 [145/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:16:34.575 [146/267] Linking static target lib/librte_ring.a 00:16:34.575 [147/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:16:34.835 [148/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:16:34.835 [149/267] Linking static target lib/librte_net.a 00:16:34.835 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:16:34.835 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:16:34.835 [152/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:16:34.835 [153/267] Linking static target lib/librte_cmdline.a 00:16:34.835 [154/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:16:34.835 [155/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:16:34.835 [156/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:16:34.835 [157/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:16:34.835 [158/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:16:34.835 [159/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:16:34.835 [160/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.835 [161/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:16:34.835 [162/267] Linking static target lib/librte_dmadev.a 00:16:34.835 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:16:34.835 [164/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:16:34.835 [165/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:16:34.835 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:16:34.835 [167/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:16:34.835 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:16:34.836 [169/267] Linking static target lib/librte_power.a 00:16:34.836 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:16:34.836 [171/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:16:34.836 [172/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:16:34.836 [173/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:16:34.836 [174/267] Linking static target lib/librte_compressdev.a 00:16:34.836 [175/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.836 [176/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:16:34.836 [177/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:16:34.836 [178/267] Linking target lib/librte_log.so.24.1 00:16:34.836 [179/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:16:34.836 [180/267] Linking static target lib/librte_hash.a 00:16:34.836 [181/267] Linking static target lib/librte_eal.a 00:16:34.836 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:16:34.836 [183/267] Linking static target lib/librte_reorder.a 00:16:34.836 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:16:34.836 [185/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:16:34.836 [186/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:16:34.836 [187/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.836 [188/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:16:34.836 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:16:34.836 [190/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:16:34.836 [191/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:16:35.097 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:16:35.097 [193/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 
00:16:35.097 [194/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:16:35.097 [195/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:16:35.097 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:35.097 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:35.097 [198/267] Linking target lib/librte_kvargs.so.24.1 00:16:35.097 [199/267] Linking static target lib/librte_cryptodev.a 00:16:35.097 [200/267] Linking static target drivers/librte_mempool_ring.a 00:16:35.097 [201/267] Linking target lib/librte_telemetry.so.24.1 00:16:35.097 [202/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:35.097 [203/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:35.097 [204/267] Linking static target drivers/librte_bus_vdev.a 00:16:35.097 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.097 [206/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.097 [207/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:16:35.097 [208/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:16:35.097 [209/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:35.097 [210/267] Linking static target drivers/librte_bus_pci.a 00:16:35.097 [211/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:35.097 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:16:35.098 [213/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.359 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.359 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.359 [216/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.620 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.620 [218/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.620 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:16:35.620 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.620 [221/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:16:35.620 [222/267] Linking static target lib/librte_ethdev.a 00:16:35.881 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.881 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:16:35.881 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:36.142 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:16:36.715 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:16:36.715 [228/267] Linking static target lib/librte_vhost.a 00:16:37.300 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:16:39.285 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:16:45.878 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:46.450 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:16:46.450 [233/267] Linking target lib/librte_eal.so.24.1 00:16:46.711 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:16:46.711 [235/267] Linking target lib/librte_pci.so.24.1 00:16:46.711 [236/267] Linking target lib/librte_dmadev.so.24.1 00:16:46.711 [237/267] Linking target lib/librte_ring.so.24.1 00:16:46.711 [238/267] Linking target lib/librte_meter.so.24.1 00:16:46.711 [239/267] Linking target lib/librte_timer.so.24.1 00:16:46.711 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:16:46.971 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:16:46.971 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:16:46.971 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:16:46.971 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:16:46.971 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:16:46.971 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:16:46.971 [247/267] Linking target lib/librte_rcu.so.24.1 00:16:46.971 [248/267] Linking target lib/librte_mempool.so.24.1 00:16:47.232 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:16:47.232 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:16:47.232 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:16:47.232 [252/267] Linking target lib/librte_mbuf.so.24.1 00:16:47.232 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:16:47.493 [254/267] Linking target lib/librte_reorder.so.24.1 00:16:47.493 [255/267] Linking target lib/librte_compressdev.so.24.1 00:16:47.493 [256/267] Linking target lib/librte_net.so.24.1 00:16:47.493 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:16:47.493 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:16:47.493 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:16:47.493 [260/267] Linking target lib/librte_cmdline.so.24.1 00:16:47.493 [261/267] Linking target lib/librte_security.so.24.1 00:16:47.494 [262/267] Linking target lib/librte_hash.so.24.1 00:16:47.494 [263/267] Linking target lib/librte_ethdev.so.24.1 00:16:47.755 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:16:47.755 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:16:47.755 [266/267] Linking target lib/librte_power.so.24.1 00:16:47.755 [267/267] Linking target lib/librte_vhost.so.24.1 00:16:47.755 INFO: autodetecting backend as ninja 00:16:47.755 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:16:49.139 CC lib/ut/ut.o 00:16:49.139 CC lib/log/log.o 00:16:49.139 CC lib/log/log_flags.o 00:16:49.139 CC lib/ut_mock/mock.o 00:16:49.139 CC lib/log/log_deprecated.o 00:16:49.139 LIB libspdk_ut.a 00:16:49.139 SO libspdk_ut.so.2.0 00:16:49.139 LIB libspdk_ut_mock.a 00:16:49.139 
LIB libspdk_log.a 00:16:49.139 SO libspdk_ut_mock.so.6.0 00:16:49.139 SO libspdk_log.so.7.0 00:16:49.139 SYMLINK libspdk_ut.so 00:16:49.399 SYMLINK libspdk_ut_mock.so 00:16:49.399 SYMLINK libspdk_log.so 00:16:49.659 CC lib/ioat/ioat.o 00:16:49.659 CC lib/util/base64.o 00:16:49.659 CC lib/util/bit_array.o 00:16:49.659 CC lib/util/cpuset.o 00:16:49.659 CC lib/util/crc16.o 00:16:49.659 CC lib/util/crc32.o 00:16:49.659 CC lib/util/crc32c.o 00:16:49.659 CC lib/dma/dma.o 00:16:49.659 CXX lib/trace_parser/trace.o 00:16:49.659 CC lib/util/crc32_ieee.o 00:16:49.659 CC lib/util/crc64.o 00:16:49.659 CC lib/util/dif.o 00:16:49.659 CC lib/util/fd.o 00:16:49.659 CC lib/util/file.o 00:16:49.659 CC lib/util/hexlify.o 00:16:49.659 CC lib/util/iov.o 00:16:49.659 CC lib/util/math.o 00:16:49.659 CC lib/util/pipe.o 00:16:49.659 CC lib/util/strerror_tls.o 00:16:49.659 CC lib/util/string.o 00:16:49.659 CC lib/util/uuid.o 00:16:49.659 CC lib/util/fd_group.o 00:16:49.659 CC lib/util/xor.o 00:16:49.659 CC lib/util/zipf.o 00:16:49.919 CC lib/vfio_user/host/vfio_user.o 00:16:49.919 CC lib/vfio_user/host/vfio_user_pci.o 00:16:49.919 LIB libspdk_dma.a 00:16:49.919 LIB libspdk_ioat.a 00:16:49.919 SO libspdk_ioat.so.7.0 00:16:49.919 SO libspdk_dma.so.4.0 00:16:49.919 SYMLINK libspdk_dma.so 00:16:49.919 SYMLINK libspdk_ioat.so 00:16:49.919 LIB libspdk_vfio_user.a 00:16:50.182 SO libspdk_vfio_user.so.5.0 00:16:50.182 LIB libspdk_util.a 00:16:50.182 SYMLINK libspdk_vfio_user.so 00:16:50.182 SO libspdk_util.so.9.0 00:16:50.443 SYMLINK libspdk_util.so 00:16:50.443 LIB libspdk_trace_parser.a 00:16:50.443 SO libspdk_trace_parser.so.5.0 00:16:50.703 SYMLINK libspdk_trace_parser.so 00:16:50.703 CC lib/vmd/vmd.o 00:16:50.703 CC lib/vmd/led.o 00:16:50.703 CC lib/json/json_parse.o 00:16:50.703 CC lib/json/json_util.o 00:16:50.703 CC lib/rdma/common.o 00:16:50.703 CC lib/json/json_write.o 00:16:50.703 CC lib/rdma/rdma_verbs.o 00:16:50.703 CC lib/conf/conf.o 00:16:50.703 CC lib/env_dpdk/env.o 00:16:50.703 CC lib/env_dpdk/memory.o 00:16:50.703 CC lib/env_dpdk/pci.o 00:16:50.703 CC lib/idxd/idxd.o 00:16:50.703 CC lib/env_dpdk/init.o 00:16:50.703 CC lib/idxd/idxd_user.o 00:16:50.703 CC lib/env_dpdk/threads.o 00:16:50.703 CC lib/env_dpdk/pci_vmd.o 00:16:50.703 CC lib/env_dpdk/pci_ioat.o 00:16:50.703 CC lib/idxd/idxd_kernel.o 00:16:50.703 CC lib/env_dpdk/pci_virtio.o 00:16:50.703 CC lib/env_dpdk/pci_idxd.o 00:16:50.703 CC lib/env_dpdk/pci_event.o 00:16:50.703 CC lib/env_dpdk/sigbus_handler.o 00:16:50.703 CC lib/env_dpdk/pci_dpdk.o 00:16:50.703 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:50.703 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:50.963 LIB libspdk_conf.a 00:16:50.964 LIB libspdk_json.a 00:16:50.964 SO libspdk_conf.so.6.0 00:16:50.964 LIB libspdk_rdma.a 00:16:50.964 SO libspdk_json.so.6.0 00:16:50.964 SO libspdk_rdma.so.6.0 00:16:50.964 SYMLINK libspdk_conf.so 00:16:51.224 SYMLINK libspdk_json.so 00:16:51.224 SYMLINK libspdk_rdma.so 00:16:51.224 LIB libspdk_idxd.a 00:16:51.224 LIB libspdk_vmd.a 00:16:51.224 SO libspdk_idxd.so.12.0 00:16:51.224 SO libspdk_vmd.so.6.0 00:16:51.224 SYMLINK libspdk_idxd.so 00:16:51.485 SYMLINK libspdk_vmd.so 00:16:51.485 CC lib/jsonrpc/jsonrpc_server.o 00:16:51.485 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:16:51.485 CC lib/jsonrpc/jsonrpc_client.o 00:16:51.485 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:51.746 LIB libspdk_jsonrpc.a 00:16:51.746 SO libspdk_jsonrpc.so.6.0 00:16:51.746 SYMLINK libspdk_jsonrpc.so 00:16:52.007 LIB libspdk_env_dpdk.a 00:16:52.007 SO libspdk_env_dpdk.so.14.1 00:16:52.267 SYMLINK 
libspdk_env_dpdk.so 00:16:52.267 CC lib/rpc/rpc.o 00:16:52.528 LIB libspdk_rpc.a 00:16:52.528 SO libspdk_rpc.so.6.0 00:16:52.528 SYMLINK libspdk_rpc.so 00:16:52.788 CC lib/notify/notify.o 00:16:52.788 CC lib/notify/notify_rpc.o 00:16:52.788 CC lib/keyring/keyring.o 00:16:52.788 CC lib/keyring/keyring_rpc.o 00:16:52.788 CC lib/trace/trace.o 00:16:52.788 CC lib/trace/trace_flags.o 00:16:52.788 CC lib/trace/trace_rpc.o 00:16:53.049 LIB libspdk_notify.a 00:16:53.049 SO libspdk_notify.so.6.0 00:16:53.049 LIB libspdk_keyring.a 00:16:53.049 SO libspdk_keyring.so.1.0 00:16:53.049 LIB libspdk_trace.a 00:16:53.049 SYMLINK libspdk_notify.so 00:16:53.309 SO libspdk_trace.so.10.0 00:16:53.309 SYMLINK libspdk_keyring.so 00:16:53.309 SYMLINK libspdk_trace.so 00:16:53.570 CC lib/thread/thread.o 00:16:53.570 CC lib/thread/iobuf.o 00:16:53.570 CC lib/sock/sock.o 00:16:53.570 CC lib/sock/sock_rpc.o 00:16:54.141 LIB libspdk_sock.a 00:16:54.141 SO libspdk_sock.so.9.0 00:16:54.141 SYMLINK libspdk_sock.so 00:16:54.402 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:54.402 CC lib/nvme/nvme_ctrlr.o 00:16:54.402 CC lib/nvme/nvme_fabric.o 00:16:54.402 CC lib/nvme/nvme_ns_cmd.o 00:16:54.402 CC lib/nvme/nvme_ns.o 00:16:54.402 CC lib/nvme/nvme_pcie_common.o 00:16:54.402 CC lib/nvme/nvme_pcie.o 00:16:54.402 CC lib/nvme/nvme_qpair.o 00:16:54.402 CC lib/nvme/nvme.o 00:16:54.402 CC lib/nvme/nvme_discovery.o 00:16:54.402 CC lib/nvme/nvme_quirks.o 00:16:54.402 CC lib/nvme/nvme_transport.o 00:16:54.402 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:54.402 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:54.402 CC lib/nvme/nvme_tcp.o 00:16:54.402 CC lib/nvme/nvme_opal.o 00:16:54.402 CC lib/nvme/nvme_io_msg.o 00:16:54.402 CC lib/nvme/nvme_poll_group.o 00:16:54.402 CC lib/nvme/nvme_zns.o 00:16:54.402 CC lib/nvme/nvme_stubs.o 00:16:54.402 CC lib/nvme/nvme_auth.o 00:16:54.402 CC lib/nvme/nvme_cuse.o 00:16:54.402 CC lib/nvme/nvme_vfio_user.o 00:16:54.402 CC lib/nvme/nvme_rdma.o 00:16:54.975 LIB libspdk_thread.a 00:16:54.975 SO libspdk_thread.so.10.0 00:16:54.975 SYMLINK libspdk_thread.so 00:16:55.236 CC lib/init/json_config.o 00:16:55.236 CC lib/init/subsystem.o 00:16:55.236 CC lib/init/subsystem_rpc.o 00:16:55.236 CC lib/init/rpc.o 00:16:55.236 CC lib/virtio/virtio.o 00:16:55.236 CC lib/virtio/virtio_vhost_user.o 00:16:55.236 CC lib/virtio/virtio_vfio_user.o 00:16:55.236 CC lib/virtio/virtio_pci.o 00:16:55.236 CC lib/vfu_tgt/tgt_endpoint.o 00:16:55.236 CC lib/vfu_tgt/tgt_rpc.o 00:16:55.236 CC lib/blob/blobstore.o 00:16:55.236 CC lib/blob/request.o 00:16:55.236 CC lib/blob/zeroes.o 00:16:55.236 CC lib/blob/blob_bs_dev.o 00:16:55.236 CC lib/accel/accel.o 00:16:55.236 CC lib/accel/accel_rpc.o 00:16:55.236 CC lib/accel/accel_sw.o 00:16:55.497 LIB libspdk_init.a 00:16:55.497 SO libspdk_init.so.5.0 00:16:55.497 LIB libspdk_vfu_tgt.a 00:16:55.497 LIB libspdk_virtio.a 00:16:55.759 SO libspdk_vfu_tgt.so.3.0 00:16:55.759 SYMLINK libspdk_init.so 00:16:55.759 SO libspdk_virtio.so.7.0 00:16:55.759 SYMLINK libspdk_vfu_tgt.so 00:16:55.759 SYMLINK libspdk_virtio.so 00:16:56.020 CC lib/event/app.o 00:16:56.020 CC lib/event/reactor.o 00:16:56.020 CC lib/event/log_rpc.o 00:16:56.020 CC lib/event/app_rpc.o 00:16:56.020 CC lib/event/scheduler_static.o 00:16:56.282 LIB libspdk_accel.a 00:16:56.282 LIB libspdk_nvme.a 00:16:56.282 SO libspdk_accel.so.15.0 00:16:56.282 SYMLINK libspdk_accel.so 00:16:56.282 SO libspdk_nvme.so.13.1 00:16:56.282 LIB libspdk_event.a 00:16:56.545 SO libspdk_event.so.13.1 00:16:56.545 SYMLINK libspdk_event.so 00:16:56.545 CC lib/bdev/bdev.o 00:16:56.545 CC 
lib/bdev/bdev_rpc.o 00:16:56.545 CC lib/bdev/bdev_zone.o 00:16:56.545 CC lib/bdev/part.o 00:16:56.545 CC lib/bdev/scsi_nvme.o 00:16:56.545 SYMLINK libspdk_nvme.so 00:16:57.932 LIB libspdk_blob.a 00:16:57.932 SO libspdk_blob.so.11.0 00:16:57.932 SYMLINK libspdk_blob.so 00:16:58.504 CC lib/lvol/lvol.o 00:16:58.504 CC lib/blobfs/blobfs.o 00:16:58.504 CC lib/blobfs/tree.o 00:16:59.076 LIB libspdk_bdev.a 00:16:59.076 SO libspdk_bdev.so.15.0 00:16:59.076 SYMLINK libspdk_bdev.so 00:16:59.076 LIB libspdk_blobfs.a 00:16:59.076 SO libspdk_blobfs.so.10.0 00:16:59.076 LIB libspdk_lvol.a 00:16:59.076 SYMLINK libspdk_blobfs.so 00:16:59.076 SO libspdk_lvol.so.10.0 00:16:59.337 SYMLINK libspdk_lvol.so 00:16:59.337 CC lib/nvmf/ctrlr.o 00:16:59.337 CC lib/scsi/dev.o 00:16:59.337 CC lib/nvmf/ctrlr_discovery.o 00:16:59.337 CC lib/nvmf/ctrlr_bdev.o 00:16:59.337 CC lib/nvmf/subsystem.o 00:16:59.337 CC lib/scsi/lun.o 00:16:59.337 CC lib/nvmf/nvmf.o 00:16:59.337 CC lib/nvmf/nvmf_rpc.o 00:16:59.337 CC lib/scsi/port.o 00:16:59.337 CC lib/nvmf/transport.o 00:16:59.337 CC lib/scsi/scsi.o 00:16:59.337 CC lib/scsi/scsi_bdev.o 00:16:59.337 CC lib/nvmf/tcp.o 00:16:59.337 CC lib/nvmf/stubs.o 00:16:59.337 CC lib/scsi/scsi_pr.o 00:16:59.337 CC lib/nvmf/mdns_server.o 00:16:59.337 CC lib/scsi/scsi_rpc.o 00:16:59.337 CC lib/scsi/task.o 00:16:59.337 CC lib/nvmf/vfio_user.o 00:16:59.337 CC lib/nvmf/rdma.o 00:16:59.337 CC lib/nvmf/auth.o 00:16:59.337 CC lib/ublk/ublk.o 00:16:59.337 CC lib/ublk/ublk_rpc.o 00:16:59.337 CC lib/nbd/nbd.o 00:16:59.337 CC lib/nbd/nbd_rpc.o 00:16:59.337 CC lib/ftl/ftl_core.o 00:16:59.337 CC lib/ftl/ftl_init.o 00:16:59.337 CC lib/ftl/ftl_layout.o 00:16:59.337 CC lib/ftl/ftl_debug.o 00:16:59.337 CC lib/ftl/ftl_io.o 00:16:59.337 CC lib/ftl/ftl_sb.o 00:16:59.337 CC lib/ftl/ftl_l2p.o 00:16:59.337 CC lib/ftl/ftl_l2p_flat.o 00:16:59.337 CC lib/ftl/ftl_nv_cache.o 00:16:59.337 CC lib/ftl/ftl_band_ops.o 00:16:59.337 CC lib/ftl/ftl_writer.o 00:16:59.337 CC lib/ftl/ftl_band.o 00:16:59.337 CC lib/ftl/ftl_rq.o 00:16:59.337 CC lib/ftl/ftl_reloc.o 00:16:59.337 CC lib/ftl/ftl_l2p_cache.o 00:16:59.337 CC lib/ftl/ftl_p2l.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:59.337 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:59.596 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:59.597 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:59.597 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:59.597 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:59.597 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:59.597 CC lib/ftl/utils/ftl_md.o 00:16:59.597 CC lib/ftl/utils/ftl_bitmap.o 00:16:59.597 CC lib/ftl/utils/ftl_property.o 00:16:59.597 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:59.597 CC lib/ftl/utils/ftl_conf.o 00:16:59.597 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:59.597 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:59.597 CC lib/ftl/utils/ftl_mempool.o 00:16:59.597 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:59.597 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:59.597 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:59.597 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:59.597 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:59.597 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:16:59.597 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:59.597 CC lib/ftl/base/ftl_base_bdev.o 00:16:59.597 CC lib/ftl/nvc/ftl_nvc_dev.o 00:16:59.597 CC 
lib/ftl/base/ftl_base_dev.o 00:16:59.597 CC lib/ftl/ftl_trace.o 00:17:00.167 LIB libspdk_nbd.a 00:17:00.167 LIB libspdk_scsi.a 00:17:00.167 SO libspdk_nbd.so.7.0 00:17:00.167 SO libspdk_scsi.so.9.0 00:17:00.167 SYMLINK libspdk_nbd.so 00:17:00.167 SYMLINK libspdk_scsi.so 00:17:00.167 LIB libspdk_ublk.a 00:17:00.167 SO libspdk_ublk.so.3.0 00:17:00.427 SYMLINK libspdk_ublk.so 00:17:00.427 LIB libspdk_ftl.a 00:17:00.427 CC lib/iscsi/conn.o 00:17:00.427 CC lib/iscsi/init_grp.o 00:17:00.427 CC lib/iscsi/iscsi.o 00:17:00.427 CC lib/vhost/vhost.o 00:17:00.427 CC lib/iscsi/portal_grp.o 00:17:00.427 CC lib/iscsi/md5.o 00:17:00.427 CC lib/vhost/vhost_rpc.o 00:17:00.427 CC lib/iscsi/param.o 00:17:00.427 CC lib/iscsi/iscsi_subsystem.o 00:17:00.427 CC lib/vhost/vhost_scsi.o 00:17:00.427 CC lib/iscsi/tgt_node.o 00:17:00.427 CC lib/vhost/vhost_blk.o 00:17:00.427 CC lib/vhost/rte_vhost_user.o 00:17:00.427 CC lib/iscsi/iscsi_rpc.o 00:17:00.427 CC lib/iscsi/task.o 00:17:00.688 SO libspdk_ftl.so.9.0 00:17:00.948 SYMLINK libspdk_ftl.so 00:17:01.209 LIB libspdk_nvmf.a 00:17:01.469 SO libspdk_nvmf.so.18.1 00:17:01.469 LIB libspdk_vhost.a 00:17:01.469 SO libspdk_vhost.so.8.0 00:17:01.469 SYMLINK libspdk_nvmf.so 00:17:01.730 SYMLINK libspdk_vhost.so 00:17:01.730 LIB libspdk_iscsi.a 00:17:01.730 SO libspdk_iscsi.so.8.0 00:17:01.991 SYMLINK libspdk_iscsi.so 00:17:02.252 CC module/vfu_device/vfu_virtio_blk.o 00:17:02.252 CC module/vfu_device/vfu_virtio.o 00:17:02.252 CC module/vfu_device/vfu_virtio_scsi.o 00:17:02.252 CC module/vfu_device/vfu_virtio_rpc.o 00:17:02.512 CC module/env_dpdk/env_dpdk_rpc.o 00:17:02.512 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:17:02.512 CC module/accel/error/accel_error.o 00:17:02.512 CC module/accel/error/accel_error_rpc.o 00:17:02.512 CC module/accel/dsa/accel_dsa.o 00:17:02.512 LIB libspdk_env_dpdk_rpc.a 00:17:02.512 CC module/blob/bdev/blob_bdev.o 00:17:02.512 CC module/sock/posix/posix.o 00:17:02.512 CC module/accel/dsa/accel_dsa_rpc.o 00:17:02.512 CC module/accel/iaa/accel_iaa.o 00:17:02.512 CC module/accel/iaa/accel_iaa_rpc.o 00:17:02.512 CC module/scheduler/dynamic/scheduler_dynamic.o 00:17:02.512 CC module/scheduler/gscheduler/gscheduler.o 00:17:02.512 CC module/accel/ioat/accel_ioat.o 00:17:02.512 CC module/accel/ioat/accel_ioat_rpc.o 00:17:02.512 CC module/keyring/file/keyring.o 00:17:02.512 CC module/keyring/file/keyring_rpc.o 00:17:02.512 CC module/keyring/linux/keyring.o 00:17:02.512 CC module/keyring/linux/keyring_rpc.o 00:17:02.512 SO libspdk_env_dpdk_rpc.so.6.0 00:17:02.772 SYMLINK libspdk_env_dpdk_rpc.so 00:17:02.772 LIB libspdk_scheduler_gscheduler.a 00:17:02.772 LIB libspdk_scheduler_dpdk_governor.a 00:17:02.772 LIB libspdk_accel_error.a 00:17:02.772 LIB libspdk_keyring_file.a 00:17:02.772 LIB libspdk_keyring_linux.a 00:17:02.772 SO libspdk_scheduler_dpdk_governor.so.4.0 00:17:02.772 SO libspdk_scheduler_gscheduler.so.4.0 00:17:02.772 LIB libspdk_scheduler_dynamic.a 00:17:02.772 SO libspdk_keyring_file.so.1.0 00:17:02.772 SO libspdk_accel_error.so.2.0 00:17:02.772 LIB libspdk_accel_ioat.a 00:17:02.772 SO libspdk_keyring_linux.so.1.0 00:17:02.772 LIB libspdk_accel_iaa.a 00:17:02.772 SYMLINK libspdk_scheduler_dpdk_governor.so 00:17:02.772 SO libspdk_scheduler_dynamic.so.4.0 00:17:02.772 SO libspdk_accel_ioat.so.6.0 00:17:02.772 LIB libspdk_accel_dsa.a 00:17:02.772 SYMLINK libspdk_scheduler_gscheduler.so 00:17:02.772 SYMLINK libspdk_keyring_file.so 00:17:02.772 SO libspdk_accel_iaa.so.3.0 00:17:02.772 LIB libspdk_blob_bdev.a 00:17:02.772 SO 
libspdk_accel_dsa.so.5.0 00:17:02.772 SYMLINK libspdk_accel_error.so 00:17:02.772 SYMLINK libspdk_keyring_linux.so 00:17:02.772 SYMLINK libspdk_scheduler_dynamic.so 00:17:02.772 SO libspdk_blob_bdev.so.11.0 00:17:03.064 SYMLINK libspdk_accel_ioat.so 00:17:03.064 SYMLINK libspdk_accel_iaa.so 00:17:03.064 LIB libspdk_vfu_device.a 00:17:03.064 SYMLINK libspdk_accel_dsa.so 00:17:03.064 SYMLINK libspdk_blob_bdev.so 00:17:03.064 SO libspdk_vfu_device.so.3.0 00:17:03.064 SYMLINK libspdk_vfu_device.so 00:17:03.360 LIB libspdk_sock_posix.a 00:17:03.360 SO libspdk_sock_posix.so.6.0 00:17:03.360 SYMLINK libspdk_sock_posix.so 00:17:03.360 CC module/bdev/delay/vbdev_delay.o 00:17:03.360 CC module/bdev/passthru/vbdev_passthru.o 00:17:03.360 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:17:03.360 CC module/bdev/delay/vbdev_delay_rpc.o 00:17:03.360 CC module/bdev/nvme/bdev_nvme.o 00:17:03.360 CC module/bdev/nvme/bdev_nvme_rpc.o 00:17:03.360 CC module/bdev/nvme/nvme_rpc.o 00:17:03.360 CC module/bdev/nvme/bdev_mdns_client.o 00:17:03.619 CC module/bdev/nvme/vbdev_opal.o 00:17:03.619 CC module/bdev/nvme/vbdev_opal_rpc.o 00:17:03.619 CC module/bdev/error/vbdev_error.o 00:17:03.619 CC module/bdev/error/vbdev_error_rpc.o 00:17:03.619 CC module/bdev/lvol/vbdev_lvol.o 00:17:03.619 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:17:03.619 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:17:03.619 CC module/bdev/malloc/bdev_malloc_rpc.o 00:17:03.619 CC module/bdev/gpt/gpt.o 00:17:03.619 CC module/bdev/malloc/bdev_malloc.o 00:17:03.619 CC module/bdev/split/vbdev_split.o 00:17:03.619 CC module/bdev/split/vbdev_split_rpc.o 00:17:03.619 CC module/bdev/gpt/vbdev_gpt.o 00:17:03.619 CC module/bdev/virtio/bdev_virtio_scsi.o 00:17:03.619 CC module/bdev/virtio/bdev_virtio_blk.o 00:17:03.619 CC module/bdev/null/bdev_null.o 00:17:03.619 CC module/bdev/virtio/bdev_virtio_rpc.o 00:17:03.619 CC module/bdev/null/bdev_null_rpc.o 00:17:03.619 CC module/blobfs/bdev/blobfs_bdev.o 00:17:03.619 CC module/bdev/aio/bdev_aio_rpc.o 00:17:03.619 CC module/bdev/aio/bdev_aio.o 00:17:03.619 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:17:03.619 CC module/bdev/iscsi/bdev_iscsi.o 00:17:03.619 CC module/bdev/ftl/bdev_ftl.o 00:17:03.619 CC module/bdev/raid/bdev_raid.o 00:17:03.619 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:17:03.619 CC module/bdev/ftl/bdev_ftl_rpc.o 00:17:03.619 CC module/bdev/raid/bdev_raid_rpc.o 00:17:03.619 CC module/bdev/raid/bdev_raid_sb.o 00:17:03.619 CC module/bdev/raid/raid0.o 00:17:03.619 CC module/bdev/raid/raid1.o 00:17:03.619 CC module/bdev/raid/concat.o 00:17:03.619 CC module/bdev/zone_block/vbdev_zone_block.o 00:17:03.619 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:17:03.619 LIB libspdk_blobfs_bdev.a 00:17:03.619 SO libspdk_blobfs_bdev.so.6.0 00:17:03.619 LIB libspdk_bdev_split.a 00:17:03.880 LIB libspdk_bdev_null.a 00:17:03.880 LIB libspdk_bdev_passthru.a 00:17:03.880 SYMLINK libspdk_blobfs_bdev.so 00:17:03.880 SO libspdk_bdev_split.so.6.0 00:17:03.880 LIB libspdk_bdev_gpt.a 00:17:03.880 SO libspdk_bdev_passthru.so.6.0 00:17:03.880 SO libspdk_bdev_null.so.6.0 00:17:03.880 LIB libspdk_bdev_error.a 00:17:03.880 SO libspdk_bdev_gpt.so.6.0 00:17:03.880 LIB libspdk_bdev_ftl.a 00:17:03.880 LIB libspdk_bdev_aio.a 00:17:03.880 LIB libspdk_bdev_delay.a 00:17:03.880 SYMLINK libspdk_bdev_split.so 00:17:03.880 SO libspdk_bdev_error.so.6.0 00:17:03.880 SO libspdk_bdev_aio.so.6.0 00:17:03.880 SO libspdk_bdev_ftl.so.6.0 00:17:03.880 SYMLINK libspdk_bdev_passthru.so 00:17:03.880 SYMLINK libspdk_bdev_null.so 00:17:03.880 SO 
libspdk_bdev_delay.so.6.0 00:17:03.880 LIB libspdk_bdev_zone_block.a 00:17:03.880 SYMLINK libspdk_bdev_gpt.so 00:17:03.880 LIB libspdk_bdev_iscsi.a 00:17:03.880 LIB libspdk_bdev_malloc.a 00:17:03.880 SYMLINK libspdk_bdev_error.so 00:17:03.880 SO libspdk_bdev_zone_block.so.6.0 00:17:03.880 SYMLINK libspdk_bdev_aio.so 00:17:03.880 SYMLINK libspdk_bdev_delay.so 00:17:03.880 SO libspdk_bdev_iscsi.so.6.0 00:17:03.880 SYMLINK libspdk_bdev_ftl.so 00:17:03.880 SO libspdk_bdev_malloc.so.6.0 00:17:03.880 SYMLINK libspdk_bdev_zone_block.so 00:17:03.880 LIB libspdk_bdev_virtio.a 00:17:03.880 LIB libspdk_bdev_lvol.a 00:17:04.142 SYMLINK libspdk_bdev_iscsi.so 00:17:04.142 SYMLINK libspdk_bdev_malloc.so 00:17:04.142 SO libspdk_bdev_lvol.so.6.0 00:17:04.142 SO libspdk_bdev_virtio.so.6.0 00:17:04.142 SYMLINK libspdk_bdev_virtio.so 00:17:04.142 SYMLINK libspdk_bdev_lvol.so 00:17:04.403 LIB libspdk_bdev_raid.a 00:17:04.403 SO libspdk_bdev_raid.so.6.0 00:17:04.403 SYMLINK libspdk_bdev_raid.so 00:17:05.345 LIB libspdk_bdev_nvme.a 00:17:05.345 SO libspdk_bdev_nvme.so.7.0 00:17:05.606 SYMLINK libspdk_bdev_nvme.so 00:17:06.177 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:17:06.177 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:17:06.177 CC module/event/subsystems/iobuf/iobuf.o 00:17:06.177 CC module/event/subsystems/keyring/keyring.o 00:17:06.177 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:17:06.177 CC module/event/subsystems/vmd/vmd.o 00:17:06.177 CC module/event/subsystems/vmd/vmd_rpc.o 00:17:06.177 CC module/event/subsystems/sock/sock.o 00:17:06.177 CC module/event/subsystems/scheduler/scheduler.o 00:17:06.438 LIB libspdk_event_vfu_tgt.a 00:17:06.438 LIB libspdk_event_vmd.a 00:17:06.438 LIB libspdk_event_vhost_blk.a 00:17:06.438 LIB libspdk_event_keyring.a 00:17:06.438 LIB libspdk_event_sock.a 00:17:06.438 LIB libspdk_event_iobuf.a 00:17:06.438 LIB libspdk_event_scheduler.a 00:17:06.438 SO libspdk_event_vfu_tgt.so.3.0 00:17:06.438 SO libspdk_event_vhost_blk.so.3.0 00:17:06.438 SO libspdk_event_keyring.so.1.0 00:17:06.438 SO libspdk_event_vmd.so.6.0 00:17:06.438 SO libspdk_event_sock.so.5.0 00:17:06.438 SO libspdk_event_iobuf.so.3.0 00:17:06.438 SYMLINK libspdk_event_vfu_tgt.so 00:17:06.438 SO libspdk_event_scheduler.so.4.0 00:17:06.438 SYMLINK libspdk_event_vhost_blk.so 00:17:06.438 SYMLINK libspdk_event_sock.so 00:17:06.438 SYMLINK libspdk_event_keyring.so 00:17:06.438 SYMLINK libspdk_event_vmd.so 00:17:06.438 SYMLINK libspdk_event_iobuf.so 00:17:06.438 SYMLINK libspdk_event_scheduler.so 00:17:07.008 CC module/event/subsystems/accel/accel.o 00:17:07.008 LIB libspdk_event_accel.a 00:17:07.008 SO libspdk_event_accel.so.6.0 00:17:07.008 SYMLINK libspdk_event_accel.so 00:17:07.579 CC module/event/subsystems/bdev/bdev.o 00:17:07.579 LIB libspdk_event_bdev.a 00:17:07.579 SO libspdk_event_bdev.so.6.0 00:17:07.840 SYMLINK libspdk_event_bdev.so 00:17:08.100 CC module/event/subsystems/nbd/nbd.o 00:17:08.100 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:17:08.100 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:17:08.100 CC module/event/subsystems/scsi/scsi.o 00:17:08.100 CC module/event/subsystems/ublk/ublk.o 00:17:08.360 LIB libspdk_event_nbd.a 00:17:08.360 LIB libspdk_event_scsi.a 00:17:08.360 LIB libspdk_event_ublk.a 00:17:08.360 SO libspdk_event_nbd.so.6.0 00:17:08.360 SO libspdk_event_scsi.so.6.0 00:17:08.360 SO libspdk_event_ublk.so.3.0 00:17:08.360 LIB libspdk_event_nvmf.a 00:17:08.360 SYMLINK libspdk_event_nbd.so 00:17:08.360 SYMLINK libspdk_event_scsi.so 00:17:08.360 SO libspdk_event_nvmf.so.6.0 
00:17:08.360 SYMLINK libspdk_event_ublk.so 00:17:08.360 SYMLINK libspdk_event_nvmf.so 00:17:08.621 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:17:08.621 CC module/event/subsystems/iscsi/iscsi.o 00:17:08.881 LIB libspdk_event_vhost_scsi.a 00:17:08.881 LIB libspdk_event_iscsi.a 00:17:08.881 SO libspdk_event_vhost_scsi.so.3.0 00:17:08.881 SO libspdk_event_iscsi.so.6.0 00:17:09.143 SYMLINK libspdk_event_vhost_scsi.so 00:17:09.143 SYMLINK libspdk_event_iscsi.so 00:17:09.143 SO libspdk.so.6.0 00:17:09.143 SYMLINK libspdk.so 00:17:09.717 CC app/spdk_lspci/spdk_lspci.o 00:17:09.717 TEST_HEADER include/spdk/accel.h 00:17:09.717 TEST_HEADER include/spdk/accel_module.h 00:17:09.717 TEST_HEADER include/spdk/assert.h 00:17:09.717 TEST_HEADER include/spdk/barrier.h 00:17:09.717 TEST_HEADER include/spdk/bdev_module.h 00:17:09.717 TEST_HEADER include/spdk/bit_array.h 00:17:09.717 TEST_HEADER include/spdk/bdev_zone.h 00:17:09.717 TEST_HEADER include/spdk/base64.h 00:17:09.717 TEST_HEADER include/spdk/bit_pool.h 00:17:09.717 TEST_HEADER include/spdk/bdev.h 00:17:09.717 TEST_HEADER include/spdk/blobfs_bdev.h 00:17:09.717 TEST_HEADER include/spdk/blob_bdev.h 00:17:09.717 TEST_HEADER include/spdk/blob.h 00:17:09.717 TEST_HEADER include/spdk/config.h 00:17:09.717 TEST_HEADER include/spdk/conf.h 00:17:09.717 CC test/rpc_client/rpc_client_test.o 00:17:09.717 TEST_HEADER include/spdk/crc16.h 00:17:09.717 TEST_HEADER include/spdk/crc32.h 00:17:09.717 TEST_HEADER include/spdk/dif.h 00:17:09.717 TEST_HEADER include/spdk/endian.h 00:17:09.717 TEST_HEADER include/spdk/dma.h 00:17:09.717 TEST_HEADER include/spdk/env_dpdk.h 00:17:09.717 TEST_HEADER include/spdk/event.h 00:17:09.717 TEST_HEADER include/spdk/env.h 00:17:09.717 TEST_HEADER include/spdk/fd_group.h 00:17:09.717 TEST_HEADER include/spdk/blobfs.h 00:17:09.717 CC app/spdk_nvme_identify/identify.o 00:17:09.717 TEST_HEADER include/spdk/cpuset.h 00:17:09.717 TEST_HEADER include/spdk/fd.h 00:17:09.717 TEST_HEADER include/spdk/ftl.h 00:17:09.717 TEST_HEADER include/spdk/file.h 00:17:09.717 TEST_HEADER include/spdk/crc64.h 00:17:09.717 TEST_HEADER include/spdk/hexlify.h 00:17:09.717 TEST_HEADER include/spdk/histogram_data.h 00:17:09.717 CC app/trace_record/trace_record.o 00:17:09.717 TEST_HEADER include/spdk/idxd.h 00:17:09.717 CC app/spdk_top/spdk_top.o 00:17:09.717 TEST_HEADER include/spdk/idxd_spec.h 00:17:09.717 TEST_HEADER include/spdk/init.h 00:17:09.717 TEST_HEADER include/spdk/ioat.h 00:17:09.717 CXX app/trace/trace.o 00:17:09.717 TEST_HEADER include/spdk/iscsi_spec.h 00:17:09.717 TEST_HEADER include/spdk/json.h 00:17:09.717 TEST_HEADER include/spdk/keyring.h 00:17:09.717 CC app/spdk_nvme_discover/discovery_aer.o 00:17:09.717 TEST_HEADER include/spdk/jsonrpc.h 00:17:09.717 TEST_HEADER include/spdk/ioat_spec.h 00:17:09.717 TEST_HEADER include/spdk/gpt_spec.h 00:17:09.717 CC app/vhost/vhost.o 00:17:09.717 TEST_HEADER include/spdk/likely.h 00:17:09.717 TEST_HEADER include/spdk/log.h 00:17:09.717 TEST_HEADER include/spdk/lvol.h 00:17:09.717 TEST_HEADER include/spdk/memory.h 00:17:09.717 TEST_HEADER include/spdk/mmio.h 00:17:09.717 TEST_HEADER include/spdk/keyring_module.h 00:17:09.717 TEST_HEADER include/spdk/nbd.h 00:17:09.717 CC examples/interrupt_tgt/interrupt_tgt.o 00:17:09.717 TEST_HEADER include/spdk/notify.h 00:17:09.717 TEST_HEADER include/spdk/nvme.h 00:17:09.717 TEST_HEADER include/spdk/nvme_intel.h 00:17:09.717 CC app/iscsi_tgt/iscsi_tgt.o 00:17:09.717 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:17:09.717 TEST_HEADER include/spdk/nvme_spec.h 
00:17:09.717 TEST_HEADER include/spdk/nvme_zns.h 00:17:09.717 TEST_HEADER include/spdk/nvme_ocssd.h 00:17:09.717 TEST_HEADER include/spdk/nvmf_cmd.h 00:17:09.717 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:17:09.717 CC app/spdk_nvme_perf/perf.o 00:17:09.717 TEST_HEADER include/spdk/nvmf_spec.h 00:17:09.717 TEST_HEADER include/spdk/opal.h 00:17:09.717 TEST_HEADER include/spdk/nvmf_transport.h 00:17:09.717 TEST_HEADER include/spdk/opal_spec.h 00:17:09.717 TEST_HEADER include/spdk/pci_ids.h 00:17:09.717 TEST_HEADER include/spdk/pipe.h 00:17:09.717 TEST_HEADER include/spdk/reduce.h 00:17:09.717 TEST_HEADER include/spdk/rpc.h 00:17:09.717 TEST_HEADER include/spdk/nvmf.h 00:17:09.717 TEST_HEADER include/spdk/scheduler.h 00:17:09.717 CC app/nvmf_tgt/nvmf_main.o 00:17:09.717 TEST_HEADER include/spdk/scsi.h 00:17:09.717 TEST_HEADER include/spdk/scsi_spec.h 00:17:09.717 TEST_HEADER include/spdk/sock.h 00:17:09.717 TEST_HEADER include/spdk/string.h 00:17:09.717 TEST_HEADER include/spdk/thread.h 00:17:09.717 TEST_HEADER include/spdk/trace.h 00:17:09.717 TEST_HEADER include/spdk/tree.h 00:17:09.717 TEST_HEADER include/spdk/trace_parser.h 00:17:09.717 TEST_HEADER include/spdk/ublk.h 00:17:09.717 TEST_HEADER include/spdk/util.h 00:17:09.717 TEST_HEADER include/spdk/uuid.h 00:17:09.717 TEST_HEADER include/spdk/version.h 00:17:09.717 TEST_HEADER include/spdk/vfio_user_pci.h 00:17:09.717 TEST_HEADER include/spdk/queue.h 00:17:09.717 TEST_HEADER include/spdk/stdinc.h 00:17:09.717 TEST_HEADER include/spdk/vfio_user_spec.h 00:17:09.717 TEST_HEADER include/spdk/vhost.h 00:17:09.717 TEST_HEADER include/spdk/xor.h 00:17:09.717 TEST_HEADER include/spdk/vmd.h 00:17:09.717 TEST_HEADER include/spdk/zipf.h 00:17:09.717 CXX test/cpp_headers/accel.o 00:17:09.717 CXX test/cpp_headers/accel_module.o 00:17:09.717 CXX test/cpp_headers/assert.o 00:17:09.717 CXX test/cpp_headers/barrier.o 00:17:09.717 CXX test/cpp_headers/bdev.o 00:17:09.717 CXX test/cpp_headers/bdev_module.o 00:17:09.717 CXX test/cpp_headers/bdev_zone.o 00:17:09.717 CXX test/cpp_headers/bit_pool.o 00:17:09.717 CXX test/cpp_headers/bit_array.o 00:17:09.717 CC app/spdk_dd/spdk_dd.o 00:17:09.717 CXX test/cpp_headers/blob_bdev.o 00:17:09.717 CC app/spdk_tgt/spdk_tgt.o 00:17:09.717 CXX test/cpp_headers/blobfs_bdev.o 00:17:09.717 CXX test/cpp_headers/blob.o 00:17:09.717 CXX test/cpp_headers/base64.o 00:17:09.717 CXX test/cpp_headers/config.o 00:17:09.717 CXX test/cpp_headers/cpuset.o 00:17:09.717 CXX test/cpp_headers/blobfs.o 00:17:09.717 CXX test/cpp_headers/crc32.o 00:17:09.717 CXX test/cpp_headers/dif.o 00:17:09.717 CXX test/cpp_headers/dma.o 00:17:09.717 CXX test/cpp_headers/endian.o 00:17:09.717 CXX test/cpp_headers/conf.o 00:17:09.717 CXX test/cpp_headers/env_dpdk.o 00:17:09.717 CXX test/cpp_headers/env.o 00:17:09.717 CXX test/cpp_headers/crc64.o 00:17:09.717 CXX test/cpp_headers/crc16.o 00:17:09.717 CXX test/cpp_headers/event.o 00:17:09.717 CXX test/cpp_headers/fd_group.o 00:17:09.717 CXX test/cpp_headers/file.o 00:17:09.717 CXX test/cpp_headers/gpt_spec.o 00:17:09.717 CXX test/cpp_headers/ftl.o 00:17:09.717 CXX test/cpp_headers/histogram_data.o 00:17:09.717 CC examples/idxd/perf/perf.o 00:17:09.717 CXX test/cpp_headers/idxd.o 00:17:09.717 CXX test/cpp_headers/idxd_spec.o 00:17:09.717 CXX test/cpp_headers/init.o 00:17:09.717 CXX test/cpp_headers/ioat.o 00:17:09.717 CXX test/cpp_headers/ioat_spec.o 00:17:09.717 CXX test/cpp_headers/iscsi_spec.o 00:17:09.717 CXX test/cpp_headers/json.o 00:17:09.717 CXX test/cpp_headers/jsonrpc.o 00:17:09.717 CXX 
test/cpp_headers/fd.o 00:17:09.717 CXX test/cpp_headers/keyring.o 00:17:09.717 CXX test/cpp_headers/keyring_module.o 00:17:09.717 CXX test/cpp_headers/likely.o 00:17:09.717 CXX test/cpp_headers/log.o 00:17:09.717 CC examples/nvme/reconnect/reconnect.o 00:17:09.717 CXX test/cpp_headers/hexlify.o 00:17:09.717 CXX test/cpp_headers/memory.o 00:17:09.717 CXX test/cpp_headers/nbd.o 00:17:09.717 CXX test/cpp_headers/nvme.o 00:17:09.717 CXX test/cpp_headers/nvme_intel.o 00:17:09.717 CXX test/cpp_headers/lvol.o 00:17:09.717 CXX test/cpp_headers/nvme_ocssd_spec.o 00:17:09.717 CXX test/cpp_headers/mmio.o 00:17:09.717 CXX test/cpp_headers/notify.o 00:17:09.717 CXX test/cpp_headers/nvme_ocssd.o 00:17:09.717 CXX test/cpp_headers/nvmf.o 00:17:09.717 CXX test/cpp_headers/nvmf_spec.o 00:17:09.717 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:17:09.717 CXX test/cpp_headers/nvme_spec.o 00:17:09.717 CXX test/cpp_headers/opal.o 00:17:09.717 CXX test/cpp_headers/opal_spec.o 00:17:09.717 CXX test/cpp_headers/nvme_zns.o 00:17:09.717 CXX test/cpp_headers/pipe.o 00:17:09.717 CXX test/cpp_headers/nvmf_cmd.o 00:17:09.717 CXX test/cpp_headers/reduce.o 00:17:09.717 CXX test/cpp_headers/nvmf_fc_spec.o 00:17:09.717 CC test/event/event_perf/event_perf.o 00:17:09.717 CXX test/cpp_headers/queue.o 00:17:09.717 CXX test/cpp_headers/nvmf_transport.o 00:17:09.717 CC examples/util/zipf/zipf.o 00:17:09.717 CC examples/sock/hello_world/hello_sock.o 00:17:09.717 CXX test/cpp_headers/pci_ids.o 00:17:09.717 CXX test/cpp_headers/rpc.o 00:17:09.717 CXX test/cpp_headers/scheduler.o 00:17:09.979 CXX test/cpp_headers/scsi.o 00:17:09.979 CC test/event/app_repeat/app_repeat.o 00:17:09.980 CC examples/accel/perf/accel_perf.o 00:17:09.980 CC test/nvme/fused_ordering/fused_ordering.o 00:17:09.980 CC examples/nvme/abort/abort.o 00:17:09.980 CC test/app/stub/stub.o 00:17:09.980 CC test/dma/test_dma/test_dma.o 00:17:09.980 CC examples/blob/hello_world/hello_blob.o 00:17:09.980 CC examples/nvme/arbitration/arbitration.o 00:17:09.980 CC test/app/jsoncat/jsoncat.o 00:17:09.980 CC test/nvme/boot_partition/boot_partition.o 00:17:09.980 CC test/bdev/bdevio/bdevio.o 00:17:09.980 CC app/fio/bdev/fio_plugin.o 00:17:09.980 CC test/event/scheduler/scheduler.o 00:17:09.980 LINK rpc_client_test 00:17:09.980 CC test/nvme/err_injection/err_injection.o 00:17:09.980 CC test/event/reactor/reactor.o 00:17:09.980 CC examples/ioat/perf/perf.o 00:17:09.980 CC test/accel/dif/dif.o 00:17:09.980 CC examples/nvmf/nvmf/nvmf.o 00:17:09.980 CC test/env/vtophys/vtophys.o 00:17:09.980 CC test/nvme/reserve/reserve.o 00:17:09.980 CC examples/vmd/lsvmd/lsvmd.o 00:17:09.980 CC test/nvme/fdp/fdp.o 00:17:09.980 CC test/nvme/cuse/cuse.o 00:17:09.980 CC test/event/reactor_perf/reactor_perf.o 00:17:10.241 CC test/nvme/e2edp/nvme_dp.o 00:17:10.241 CC test/app/histogram_perf/histogram_perf.o 00:17:10.241 LINK vhost 00:17:10.241 CC examples/nvme/hotplug/hotplug.o 00:17:10.241 CC test/nvme/overhead/overhead.o 00:17:10.241 CC examples/blob/cli/blobcli.o 00:17:10.241 CC test/nvme/reset/reset.o 00:17:10.241 CC test/nvme/startup/startup.o 00:17:10.241 CC examples/nvme/hello_world/hello_world.o 00:17:10.241 CC examples/nvme/nvme_manage/nvme_manage.o 00:17:10.241 CC test/nvme/compliance/nvme_compliance.o 00:17:10.241 LINK spdk_nvme_discover 00:17:10.241 CC examples/nvme/cmb_copy/cmb_copy.o 00:17:10.241 CC examples/ioat/verify/verify.o 00:17:10.241 CC test/env/pci/pci_ut.o 00:17:10.241 CC test/nvme/connect_stress/connect_stress.o 00:17:10.241 CC examples/thread/thread/thread_ex.o 
00:17:10.241 CXX test/cpp_headers/scsi_spec.o 00:17:10.241 LINK interrupt_tgt 00:17:10.241 CXX test/cpp_headers/sock.o 00:17:10.241 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:17:10.241 CC app/fio/nvme/fio_plugin.o 00:17:10.241 CC test/nvme/sgl/sgl.o 00:17:10.241 CC test/env/memory/memory_ut.o 00:17:10.241 CXX test/cpp_headers/stdinc.o 00:17:10.241 CXX test/cpp_headers/string.o 00:17:10.241 LINK event_perf 00:17:10.241 CXX test/cpp_headers/trace.o 00:17:10.241 CXX test/cpp_headers/thread.o 00:17:10.241 CC test/nvme/doorbell_aers/doorbell_aers.o 00:17:10.241 CXX test/cpp_headers/tree.o 00:17:10.241 CC test/nvme/simple_copy/simple_copy.o 00:17:10.241 LINK jsoncat 00:17:10.241 CXX test/cpp_headers/trace_parser.o 00:17:10.241 CXX test/cpp_headers/ublk.o 00:17:10.241 CC examples/bdev/hello_world/hello_bdev.o 00:17:10.241 CC test/thread/poller_perf/poller_perf.o 00:17:10.241 CXX test/cpp_headers/util.o 00:17:10.241 CC examples/bdev/bdevperf/bdevperf.o 00:17:10.241 CC examples/vmd/led/led.o 00:17:10.241 CXX test/cpp_headers/vfio_user_pci.o 00:17:10.241 CXX test/cpp_headers/uuid.o 00:17:10.241 CXX test/cpp_headers/vfio_user_spec.o 00:17:10.241 CXX test/cpp_headers/vhost.o 00:17:10.499 CXX test/cpp_headers/version.o 00:17:10.499 CXX test/cpp_headers/zipf.o 00:17:10.499 LINK stub 00:17:10.499 CXX test/cpp_headers/vmd.o 00:17:10.499 CXX test/cpp_headers/xor.o 00:17:10.499 LINK boot_partition 00:17:10.499 CC test/nvme/aer/aer.o 00:17:10.499 CC test/app/bdev_svc/bdev_svc.o 00:17:10.499 LINK hello_sock 00:17:10.499 CC test/env/mem_callbacks/mem_callbacks.o 00:17:10.499 CC test/blobfs/mkfs/mkfs.o 00:17:10.499 LINK vtophys 00:17:10.499 LINK err_injection 00:17:10.499 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:17:10.499 LINK hello_blob 00:17:10.499 CC test/lvol/esnap/esnap.o 00:17:10.499 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:17:10.499 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:17:10.499 LINK scheduler 00:17:10.499 LINK startup 00:17:10.499 LINK idxd_perf 00:17:10.499 LINK env_dpdk_post_init 00:17:10.499 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:17:10.499 LINK spdk_trace 00:17:10.499 LINK reconnect 00:17:10.499 LINK hello_world 00:17:10.499 LINK overhead 00:17:10.499 LINK spdk_dd 00:17:10.499 LINK connect_stress 00:17:10.499 LINK led 00:17:10.757 LINK poller_perf 00:17:10.757 LINK test_dma 00:17:10.757 LINK accel_perf 00:17:10.757 LINK spdk_lspci 00:17:10.757 LINK iscsi_tgt 00:17:10.757 LINK thread 00:17:10.757 LINK doorbell_aers 00:17:10.757 LINK verify 00:17:10.757 LINK bdev_svc 00:17:10.757 LINK lsvmd 00:17:10.757 LINK histogram_perf 00:17:10.757 LINK sgl 00:17:10.757 LINK hello_bdev 00:17:10.757 LINK nvmf_tgt 00:17:10.757 LINK pmr_persistence 00:17:10.757 LINK reactor 00:17:10.757 LINK spdk_top 00:17:10.757 LINK aer 00:17:10.757 LINK app_repeat 00:17:11.017 LINK zipf 00:17:11.017 LINK reserve 00:17:11.017 LINK spdk_nvme_perf 00:17:11.017 LINK reactor_perf 00:17:11.017 LINK blobcli 00:17:11.017 LINK spdk_tgt 00:17:11.017 LINK cmb_copy 00:17:11.017 LINK spdk_trace_record 00:17:11.017 LINK spdk_nvme_identify 00:17:11.017 LINK vhost_fuzz 00:17:11.017 LINK nvme_fuzz 00:17:11.017 LINK fused_ordering 00:17:11.017 LINK ioat_perf 00:17:11.017 LINK simple_copy 00:17:11.017 LINK nvme_dp 00:17:11.017 LINK mkfs 00:17:11.017 LINK hotplug 00:17:11.017 LINK reset 00:17:11.017 LINK nvme_compliance 00:17:11.017 LINK arbitration 00:17:11.017 LINK mem_callbacks 00:17:11.017 LINK fdp 00:17:11.017 LINK pci_ut 00:17:11.017 LINK nvmf 00:17:11.278 LINK abort 00:17:11.278 LINK bdevio 00:17:11.278 LINK 
spdk_bdev 00:17:11.278 LINK dif 00:17:11.278 LINK bdevperf 00:17:11.278 LINK nvme_manage 00:17:11.278 LINK spdk_nvme 00:17:11.539 LINK cuse 00:17:11.799 LINK memory_ut 00:17:12.060 LINK iscsi_fuzz 00:17:14.604 LINK esnap 00:17:15.177 00:17:15.177 real 0m50.671s 00:17:15.177 user 6m41.139s 00:17:15.177 sys 4m43.014s 00:17:15.177 11:25:43 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:17:15.177 11:25:43 make -- common/autotest_common.sh@10 -- $ set +x 00:17:15.177 ************************************ 00:17:15.177 END TEST make 00:17:15.177 ************************************ 00:17:15.177 11:25:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:17:15.177 11:25:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:15.177 11:25:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:15.177 11:25:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:17:15.177 11:25:43 -- pm/common@44 -- $ pid=1991999 00:17:15.177 11:25:43 -- pm/common@50 -- $ kill -TERM 1991999 00:17:15.177 11:25:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:17:15.177 11:25:43 -- pm/common@44 -- $ pid=1992000 00:17:15.177 11:25:43 -- pm/common@50 -- $ kill -TERM 1992000 00:17:15.177 11:25:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:17:15.177 11:25:43 -- pm/common@44 -- $ pid=1992002 00:17:15.177 11:25:43 -- pm/common@50 -- $ kill -TERM 1992002 00:17:15.177 11:25:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:17:15.177 11:25:43 -- pm/common@44 -- $ pid=1992025 00:17:15.177 11:25:43 -- pm/common@50 -- $ sudo -E kill -TERM 1992025 00:17:15.177 11:25:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.177 11:25:44 -- nvmf/common.sh@7 -- # uname -s 00:17:15.177 11:25:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.177 11:25:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.177 11:25:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.177 11:25:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.177 11:25:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.177 11:25:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.177 11:25:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.177 11:25:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.177 11:25:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.177 11:25:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.177 11:25:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:15.177 11:25:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:15.177 11:25:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.177 11:25:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.177 11:25:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:17:15.177 11:25:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.177 11:25:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.177 11:25:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.177 11:25:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.177 11:25:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.177 11:25:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.177 11:25:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.177 11:25:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.177 11:25:44 -- paths/export.sh@5 -- # export PATH 00:17:15.177 11:25:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.177 11:25:44 -- nvmf/common.sh@47 -- # : 0 00:17:15.177 11:25:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.177 11:25:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.177 11:25:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.177 11:25:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.177 11:25:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.177 11:25:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.177 11:25:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.177 11:25:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.177 11:25:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:17:15.177 11:25:44 -- spdk/autotest.sh@32 -- # uname -s 00:17:15.177 11:25:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:17:15.177 11:25:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:17:15.177 11:25:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:17:15.177 11:25:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:17:15.177 11:25:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:17:15.177 11:25:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:17:15.177 11:25:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:17:15.177 11:25:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:17:15.177 11:25:44 -- spdk/autotest.sh@48 -- # udevadm_pid=2054885 00:17:15.177 11:25:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:17:15.177 11:25:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm 
monitor --property 00:17:15.177 11:25:44 -- pm/common@17 -- # local monitor 00:17:15.177 11:25:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:15.177 11:25:44 -- pm/common@21 -- # date +%s 00:17:15.177 11:25:44 -- pm/common@21 -- # date +%s 00:17:15.177 11:25:44 -- pm/common@25 -- # sleep 1 00:17:15.177 11:25:44 -- pm/common@21 -- # date +%s 00:17:15.177 11:25:44 -- pm/common@21 -- # date +%s 00:17:15.177 11:25:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011544 00:17:15.177 11:25:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011544 00:17:15.177 11:25:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011544 00:17:15.177 11:25:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011544 00:17:15.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011544_collect-vmstat.pm.log 00:17:15.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011544_collect-cpu-load.pm.log 00:17:15.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011544_collect-cpu-temp.pm.log 00:17:15.439 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011544_collect-bmc-pm.bmc.pm.log 00:17:16.381 11:25:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:17:16.381 11:25:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:17:16.381 11:25:45 -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:16.381 11:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:16.381 11:25:45 -- spdk/autotest.sh@59 -- # create_test_list 00:17:16.381 11:25:45 -- common/autotest_common.sh@747 -- # xtrace_disable 00:17:16.381 11:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:16.381 11:25:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:17:16.381 11:25:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:16.381 11:25:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:16.381 11:25:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:17:16.381 11:25:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:17:16.381 11:25:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:17:16.381 11:25:45 -- common/autotest_common.sh@1454 -- # uname 00:17:16.381 11:25:45 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:17:16.381 11:25:45 -- 
spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:17:16.381 11:25:45 -- common/autotest_common.sh@1474 -- # uname 00:17:16.381 11:25:45 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:17:16.381 11:25:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:17:16.381 11:25:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:17:16.381 11:25:45 -- spdk/autotest.sh@72 -- # hash lcov 00:17:16.381 11:25:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:17:16.381 11:25:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:17:16.381 --rc lcov_branch_coverage=1 00:17:16.381 --rc lcov_function_coverage=1 00:17:16.381 --rc genhtml_branch_coverage=1 00:17:16.381 --rc genhtml_function_coverage=1 00:17:16.381 --rc genhtml_legend=1 00:17:16.381 --rc geninfo_all_blocks=1 00:17:16.381 ' 00:17:16.381 11:25:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:17:16.381 --rc lcov_branch_coverage=1 00:17:16.381 --rc lcov_function_coverage=1 00:17:16.381 --rc genhtml_branch_coverage=1 00:17:16.381 --rc genhtml_function_coverage=1 00:17:16.381 --rc genhtml_legend=1 00:17:16.381 --rc geninfo_all_blocks=1 00:17:16.381 ' 00:17:16.381 11:25:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:17:16.381 --rc lcov_branch_coverage=1 00:17:16.381 --rc lcov_function_coverage=1 00:17:16.381 --rc genhtml_branch_coverage=1 00:17:16.381 --rc genhtml_function_coverage=1 00:17:16.381 --rc genhtml_legend=1 00:17:16.381 --rc geninfo_all_blocks=1 00:17:16.381 --no-external' 00:17:16.381 11:25:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:17:16.381 --rc lcov_branch_coverage=1 00:17:16.381 --rc lcov_function_coverage=1 00:17:16.381 --rc genhtml_branch_coverage=1 00:17:16.381 --rc genhtml_function_coverage=1 00:17:16.381 --rc genhtml_legend=1 00:17:16.381 --rc geninfo_all_blocks=1 00:17:16.381 --no-external' 00:17:16.381 11:25:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:17:16.381 lcov: LCOV version 1.14 00:17:16.381 11:25:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:17:28.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:17:28.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:17:43.523 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:17:43.523 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:17:43.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 
00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:17:43.524 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:17:43.524 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:17:43.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:17:43.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:17:43.525 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:17:43.786 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:17:43.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:17:43.786 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:17:45.696 11:26:14 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:17:45.696 11:26:14 -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:45.696 11:26:14 -- common/autotest_common.sh@10 -- # set +x 00:17:45.696 11:26:14 -- spdk/autotest.sh@91 -- # rm -f 00:17:45.696 11:26:14 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:17:48.311 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:17:48.311 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:17:48.311 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:17:48.311 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:17:48.311 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:17:48.311 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:17:48.312 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:65:00.0 (144d a80a): Already using the nvme driver 00:17:48.573 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:17:48.573 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:17:48.573 11:26:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:17:48.573 11:26:17 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:17:48.573 11:26:17 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:17:48.573 11:26:17 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:17:48.573 11:26:17 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:17:48.573 11:26:17 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:17:48.573 11:26:17 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:17:48.573 11:26:17 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:48.573 11:26:17 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:48.573 11:26:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:17:48.573 11:26:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:17:48.573 11:26:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:17:48.573 11:26:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:17:48.573 11:26:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:17:48.573 11:26:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:48.833 No valid GPT data, bailing 00:17:48.833 11:26:17 -- scripts/common.sh@391 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:17:48.833 11:26:17 -- scripts/common.sh@391 -- # pt= 00:17:48.833 11:26:17 -- scripts/common.sh@392 -- # return 1 00:17:48.833 11:26:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:17:48.833 1+0 records in 00:17:48.833 1+0 records out 00:17:48.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426023 s, 246 MB/s 00:17:48.833 11:26:17 -- spdk/autotest.sh@118 -- # sync 00:17:48.833 11:26:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:17:48.833 11:26:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:17:48.833 11:26:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:17:57.028 11:26:25 -- spdk/autotest.sh@124 -- # uname -s 00:17:57.028 11:26:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:17:57.028 11:26:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:17:57.028 11:26:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:57.028 11:26:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:57.028 11:26:25 -- common/autotest_common.sh@10 -- # set +x 00:17:57.028 ************************************ 00:17:57.028 START TEST setup.sh 00:17:57.028 ************************************ 00:17:57.028 11:26:25 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:17:57.028 * Looking for test storage... 00:17:57.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:17:57.028 11:26:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:17:57.028 11:26:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:17:57.028 11:26:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:17:57.028 11:26:25 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:17:57.028 11:26:25 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:57.028 11:26:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:17:57.028 ************************************ 00:17:57.028 START TEST acl 00:17:57.028 ************************************ 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:17:57.028 * Looking for test storage... 
00:17:57.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:17:57.028 11:26:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:57.028 11:26:25 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:17:57.028 11:26:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:17:57.028 11:26:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:17:57.028 11:26:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:17:57.028 11:26:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:17:57.028 11:26:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:17:57.028 11:26:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:17:57.028 11:26:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:00.350 11:26:28 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:18:00.350 11:26:28 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:18:00.350 11:26:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:00.350 11:26:28 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:18:00.350 11:26:28 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:18:00.350 11:26:28 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:18:03.649 Hugepages 00:18:03.649 node hugesize free / total 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 00:18:03.649 Type BDF Vendor Device NUMA Driver Device Block devices 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.649 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:18:03.650 11:26:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:18:03.650 11:26:32 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:03.650 11:26:32 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:03.650 11:26:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:18:03.650 ************************************ 00:18:03.650 START TEST denied 00:18:03.650 ************************************ 00:18:03.650 11:26:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:18:03.650 11:26:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:18:03.650 11:26:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:18:03.650 11:26:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:18:03.650 11:26:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:18:03.650 11:26:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:18:07.858 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:18:07.858 11:26:35 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:18:07.858 11:26:35 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:18:07.858 11:26:35 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:18:07.858 11:26:35 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:18:07.858 11:26:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:18:07.858 11:26:36 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:18:07.858 11:26:36 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:18:07.858 11:26:36 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:18:07.858 11:26:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:07.858 11:26:36 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:12.068 00:18:12.068 real 0m8.089s 00:18:12.068 user 0m2.641s 00:18:12.068 sys 0m4.764s 00:18:12.068 11:26:40 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:12.068 11:26:40 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:18:12.068 ************************************ 00:18:12.068 END TEST denied 00:18:12.068 ************************************ 00:18:12.068 11:26:40 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:18:12.068 11:26:40 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:12.068 11:26:40 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:12.068 11:26:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:18:12.068 ************************************ 00:18:12.068 START TEST allowed 00:18:12.068 ************************************ 00:18:12.069 11:26:40 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:18:12.069 11:26:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:18:12.069 11:26:40 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:18:12.069 11:26:40 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:18:12.069 11:26:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:18:12.069 11:26:40 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:18:17.357 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:18:17.357 11:26:46 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:18:17.357 11:26:46 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:18:17.357 11:26:46 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:18:17.357 11:26:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:17.357 11:26:46 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:21.570 00:18:21.570 real 0m9.142s 00:18:21.570 user 0m2.583s 00:18:21.570 sys 0m4.866s 00:18:21.570 11:26:49 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:21.570 11:26:49 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:18:21.570 ************************************ 00:18:21.570 END TEST allowed 00:18:21.570 ************************************ 00:18:21.570 00:18:21.570 real 0m24.639s 00:18:21.570 user 0m7.948s 00:18:21.570 sys 0m14.482s 00:18:21.570 11:26:49 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:21.570 11:26:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:18:21.570 ************************************ 00:18:21.570 END TEST acl 00:18:21.570 ************************************ 00:18:21.570 11:26:49 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:18:21.570 11:26:49 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:21.570 11:26:49 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:18:21.570 11:26:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:18:21.570 ************************************ 00:18:21.570 START TEST hugepages 00:18:21.570 ************************************ 00:18:21.570 11:26:49 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:18:21.570 * Looking for test storage... 00:18:21.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 102183284 kB' 'MemAvailable: 106081696 kB' 'Buffers: 3736 kB' 'Cached: 14933244 kB' 'SwapCached: 0 kB' 'Active: 11750860 kB' 'Inactive: 3782212 kB' 'Active(anon): 11243000 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599776 kB' 'Mapped: 224928 kB' 'Shmem: 10646908 kB' 'KReclaimable: 649284 kB' 'Slab: 1525316 kB' 'SReclaimable: 649284 kB' 'SUnreclaim: 876032 kB' 'KernelStack: 27424 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460896 kB' 'Committed_AS: 12757628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235748 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.570 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.571 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
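The trace above shows setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches the requested field (Hugepagesize, which comes back as 2048 kB), after which setup/hugepages.sh enumerates the two NUMA nodes and zeroes their hugepage pools with clear_hp before the tests reserve their own. A minimal stand-alone sketch of those two steps follows; the helper names get_meminfo_value and clear_node_hugepages are made up for illustration, but the /proc/meminfo format and the /sys/devices/system/node/.../nr_hugepages paths are the same ones the traced script touches.

#!/usr/bin/env bash
# Sketch only - not the SPDK helpers themselves.

# Look up one /proc/meminfo field the way the traced loop does:
# split each line on ':' or whitespace and skip until the key matches.
get_meminfo_value() {
    local want=$1 key val _
    while IFS=': ' read -r key val _; do
        if [[ $key == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Zero every per-node hugepage pool, mirroring what clear_hp does through
# /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages.
clear_node_hugepages() {
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 | sudo tee "$hp" > /dev/null
    done
}

get_meminfo_value Hugepagesize   # prints 2048 on this machine, per the log

The CLEAR_HUGE=yes export that follows lets the later scripts/setup.sh runs start each test from an empty pool before reserving the page counts the test asks for.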
00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:18:21.572 11:26:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:18:21.572 11:26:50 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:21.572 11:26:50 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:21.572 11:26:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:18:21.572 ************************************ 00:18:21.572 START TEST default_setup 00:18:21.572 ************************************ 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:18:21.572 11:26:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:24.878 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:80:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:18:24.878 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:18:24.878 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104358664 kB' 'MemAvailable: 108257044 kB' 'Buffers: 3736 kB' 'Cached: 14933352 kB' 'SwapCached: 0 kB' 'Active: 11767676 kB' 'Inactive: 3782212 kB' 'Active(anon): 11259816 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616148 kB' 'Mapped: 225248 kB' 'Shmem: 10647016 kB' 'KReclaimable: 649252 kB' 'Slab: 1522612 kB' 'SReclaimable: 649252 kB' 'SUnreclaim: 873360 kB' 'KernelStack: 27664 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12782180 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 235940 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.878 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.879 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
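At this point verify_nr_hugepages has read AnonHugePages (0 kB, so anon=0) and is pulling HugePages_Surp and then HugePages_Rsvd the same way. The numbers being checked come from the earlier get_test_nr_hugepages 2097152 0 call: 2097152 kB requested at a 2048 kB page size is 1024 pages on node 0, and the meminfo dumps above indeed report HugePages_Total and HugePages_Free of 1024. A rough sketch of that bookkeeping is below; the awk one-liners, variable names, and the final comparison are illustrative and not the literal verify_nr_hugepages logic.

# Illustrative consistency check, assuming the standard /proc/meminfo fields.
size_kb=2097152                                              # from get_test_nr_hugepages 2097152 0
page_kb=$(awk '/^Hugepagesize:/  {print $2}' /proc/meminfo)  # 2048 here
expected=$(( size_kb / page_kb ))                            # 2097152 / 2048 = 1024 pages

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

if (( total - surp == expected )); then
    echo "hugepage pool as expected: total=$total surp=$surp rsvd=$rsvd"
else
    echo "unexpected hugepage pool: have $(( total - surp )), want $expected" >&2
fi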
00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104361784 kB' 'MemAvailable: 108260164 kB' 'Buffers: 3736 kB' 'Cached: 14933356 kB' 'SwapCached: 0 kB' 'Active: 11767728 kB' 'Inactive: 3782212 kB' 'Active(anon): 11259868 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616224 kB' 'Mapped: 225260 kB' 'Shmem: 10647020 kB' 'KReclaimable: 649252 kB' 'Slab: 1522612 kB' 'SReclaimable: 649252 kB' 'SUnreclaim: 873360 kB' 'KernelStack: 27648 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12780592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235892 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.880 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.881 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104362136 kB' 'MemAvailable: 108260516 kB' 'Buffers: 3736 kB' 'Cached: 14933372 kB' 'SwapCached: 0 kB' 'Active: 11767904 kB' 'Inactive: 3782212 kB' 'Active(anon): 11260044 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616372 kB' 'Mapped: 225312 kB' 'Shmem: 10647036 kB' 'KReclaimable: 649252 kB' 'Slab: 1522596 kB' 'SReclaimable: 649252 kB' 'SUnreclaim: 873344 kB' 'KernelStack: 27824 kB' 'PageTables: 10056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12798288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235940 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 
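The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until it reaches the requested field (HugePages_Rsvd at this point in the run). A minimal bash sketch of that helper, reconstructed only from the commands the trace echoes — the function wrapper and the elif branch body are assumptions, not the upstream source:

shopt -s extglob    # needed for the "Node +([0-9]) " strip below

get_meminfo() {
	local get=$1 node=${2:-}    # field name, optional NUMA node
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		# per-node query, e.g. node=0 later in this log
		mem_f=/sys/devices/system/node/node$node/meminfo
	elif [[ -n $node ]]; then
		return 1    # assumed: a node was requested but has no per-node meminfo
	fi

	mapfile -t mem < "$mem_f"
	# per-node files prefix each line with "Node N "; strip that prefix
	mem=("${mem[@]#Node +([0-9]) }")

	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"    # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

Each "continue" record in the trace is one non-matching meminfo key skipped by this loop, which is why the log repeats the same three commands for every field of the snapshot printed above.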
11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.882 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.883 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:24.884 nr_hugepages=1024 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:24.884 resv_hugepages=0 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:24.884 surplus_hugepages=0 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:24.884 anon_hugepages=0 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104360240 kB' 'MemAvailable: 108258620 kB' 'Buffers: 3736 kB' 'Cached: 14933392 kB' 'SwapCached: 0 kB' 'Active: 
11767360 kB' 'Inactive: 3782212 kB' 'Active(anon): 11259500 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616316 kB' 'Mapped: 225320 kB' 'Shmem: 10647056 kB' 'KReclaimable: 649252 kB' 'Slab: 1522596 kB' 'SReclaimable: 649252 kB' 'SUnreclaim: 873344 kB' 'KernelStack: 27760 kB' 'PageTables: 9728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12778656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235876 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 
11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.884 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:18:24.885 
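With HugePages_Surp and HugePages_Rsvd both read back as 0, setup/hugepages.sh reports the counters and asserts that the system-wide pool accounts for every requested page (trace lines @99-@110). A hedged sketch of that arithmetic, reusing the get_meminfo sketch above; the wrapper function and its name are assumptions, while the comparison itself is the one echoed in the trace:

check_hugepage_accounting() {
	local nr_hugepages=$1 surp resv total

	surp=$(get_meminfo HugePages_Surp)      # 0 in this run
	resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run

	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"

	total=$(get_meminfo HugePages_Total)    # 1024 in this run
	# 1024 == 1024 + 0 + 0 here; a surplus or reservation would shift the sum
	((total == nr_hugepages + surp + resv))
}

check_hugepage_accounting 1024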
11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:18:24.885 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55869744 kB' 'MemUsed: 9789280 kB' 'SwapCached: 0 kB' 'Active: 5231608 kB' 'Inactive: 271816 kB' 'Active(anon): 4988416 kB' 'Inactive(anon): 0 kB' 'Active(file): 243192 kB' 'Inactive(file): 271816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5184204 kB' 'Mapped: 67520 kB' 'AnonPages: 322396 kB' 'Shmem: 4669196 kB' 'KernelStack: 14504 kB' 'PageTables: 5096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330764 kB' 'Slab: 826316 kB' 'SReclaimable: 330764 kB' 'SUnreclaim: 495552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:24.886 11:26:53 
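The trace then switches from the system-wide snapshot to per-node checks: get_nodes enumerates the NUMA nodes under sysfs (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2 on this machine), and the test re-runs the same lookup against node 0, which is why get_meminfo now reads /sys/devices/system/node/node0/meminfo. A sketch of that enumeration; the loop and array names come from the trace, but the right-hand side of the nodes_sys assignment is not visible in xtrace, so reading the per-node nr_hugepages file is an assumption:

shopt -s extglob
nodes_sys=()

get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# index is the node number; value assumed to come from the node's 2 MiB pool
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}    # 2 on this machine
	((no_nodes > 0))
}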
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.886 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:18:24.887 node0=1024 expecting 1024 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:18:24.887 00:18:24.887 real 0m3.391s 00:18:24.887 user 0m1.243s 00:18:24.887 sys 0m2.079s 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:24.887 11:26:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:18:24.887 ************************************ 00:18:24.887 END TEST default_setup 00:18:24.887 ************************************ 00:18:24.887 11:26:53 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:18:24.887 11:26:53 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:24.887 11:26:53 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:24.887 11:26:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:18:24.887 ************************************ 00:18:24.887 START TEST per_node_1G_alloc 00:18:24.887 ************************************ 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
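The default_setup check above ends with "node0=1024 expecting 1024", and the per_node_1G_alloc test that starts next calls get_test_nr_hugepages with 1048576 kB and node ids 0 and 1; the trace continuing below shows it settling on 512 pages for each listed node and re-applying that target through setup.sh. A minimal sketch of that sizing step, assuming the page count is simply the requested size divided by the 2048 kB Hugepagesize reported in /proc/meminfo (the division itself is not visible in the trace):

    # Sketch only, not the literal hugepages.sh source.
    size_kb=1048576                     # size passed to get_test_nr_hugepages, in kB
    hugepagesize_kb=2048                # Hugepagesize from /proc/meminfo
    nr_hugepages=$((size_kb / hugepagesize_kb))   # -> 512
    node_ids=(0 1)                      # node ids passed after the size
    nodes_test=()
    for node in "${node_ids[@]}"; do
      nodes_test[node]=$nr_hugepages    # each listed node is expected to hold 512 pages
    done
    # The test then re-applies the allocation via the repo's setup script:
    NRHUGE=$nr_hugepages HUGENODE=0,1 ./scripts/setup.sh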
00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:18:24.887 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:18:24.888 11:26:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:28.191 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:18:28.191 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104364264 kB' 'MemAvailable: 108262612 kB' 'Buffers: 3736 kB' 'Cached: 14933512 kB' 'SwapCached: 0 kB' 'Active: 11765216 kB' 'Inactive: 3782212 kB' 'Active(anon): 11257356 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613540 kB' 'Mapped: 224436 kB' 'Shmem: 10647176 kB' 'KReclaimable: 649220 kB' 'Slab: 1522872 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873652 kB' 'KernelStack: 27456 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12762808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 
95420416 kB' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.191 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
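The long field-by-field scan running above and below is setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a node's meminfo under /sys when a node id is supplied; with node empty, the /sys/devices/system/node/node/meminfo test above fails and the global file is used), then walks the lines until the requested key matches and echoes its value. Reconstructed roughly from the xtrace rather than quoted from the source:

    # Approximate shape of setup/common.sh:get_meminfo as it appears in the trace.
    shopt -s extglob                          # needed for the "Node N " strip below
    get_meminfo() {
      local get=$1 node=$2 var val
      local mem_f=/proc/meminfo mem
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")        # drop "Node N " prefixes from sysfs lines
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue      # every other field is skipped, as above
        echo "$val" && return 0               # e.g. AnonHugePages -> 0 here
      done < <(printf '%s\n' "${mem[@]}")
      return 1
    }

Used as, for instance, anon=$(get_meminfo AnonHugePages) for the global view, or get_meminfo HugePages_Free 0 for node 0.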
00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.192 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
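Each value that get_meminfo returns is captured by verify_nr_hugepages (anon=0 above, surp=0 and then HugePages_Rsvd below) and folded into the per-node comparison that closed the default_setup run earlier ("node0=1024 expecting 1024"). A rough, illustrative reconstruction of that bookkeeping, using the get_meminfo sketch above; the exact echo wording and the observed-versus-expected ordering are inferred, not quoted from the script:

    # Illustrative only; nodes_test/nodes_sys are assumed to be already
    # populated with expected and observed per-node page counts.
    nodes_test=(512 512)
    nodes_sys=(512 512)
    sorted_t=()
    sorted_s=()
    anon=$(get_meminfo AnonHugePages)     # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # queried next in the trace
    for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += surp ))      # surplus pages count toward the node
      sorted_t[nodes_test[node]]=1        # set of expected counts
      sorted_s[nodes_sys[node]]=1         # set of observed counts
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[@]} == "${!sorted_t[@]}" ]]   # the final equality check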
00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.193 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104364016 kB' 'MemAvailable: 108262364 kB' 'Buffers: 3736 kB' 'Cached: 14933512 kB' 'SwapCached: 0 kB' 'Active: 11764876 kB' 'Inactive: 3782212 kB' 'Active(anon): 11257016 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 613232 kB' 'Mapped: 224144 kB' 'Shmem: 10647176 kB' 'KReclaimable: 649220 kB' 'Slab: 1522872 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873652 kB' 'KernelStack: 27424 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12762832 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.194 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.195 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:18:28.196 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:28.196 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104364392 kB' 'MemAvailable: 108262740 kB' 'Buffers: 3736 kB' 'Cached: 14933532 kB' 'SwapCached: 0 kB' 'Active: 11764316 kB' 'Inactive: 3782212 kB' 'Active(anon): 11256456 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612644 kB' 'Mapped: 224140 kB' 'Shmem: 10647196 kB' 'KReclaimable: 649220 kB' 'Slab: 1522952 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873732 kB' 'KernelStack: 27360 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12762856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.197 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.468 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:28.469 nr_hugepages=1024 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:28.469 resv_hugepages=0 00:18:28.469 11:26:57 
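Net effect of the two scans so far: the box reports 1024 hugepages of 2048 kB, with 0 reserved and, as the next trace lines echo, 0 surplus and 0 anonymous hugepages, which is exactly what this test requested. The consistency check the trace performs next (hugepages.sh@107 and @110) amounts to the following, reusing the get_meminfo sketch above and the variable names echoed in the log; treat it as an illustrative paraphrase rather than the script's literal body:

    nr_hugepages=1024 resv=0 surp=0                   # values echoed as nr_hugepages / resv_hugepages / surplus_hugepages
    # hugepages.sh@107/@110: the pool the kernel reports must equal the requested pages
    # plus any surplus and reserved pages (all zero in this run)
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
        && echo "global pool of $nr_hugepages hugepages accounted for"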
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:28.469 surplus_hugepages=0 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:28.469 anon_hugepages=0 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104365520 kB' 'MemAvailable: 108263868 kB' 'Buffers: 3736 kB' 'Cached: 14933556 kB' 'SwapCached: 0 kB' 'Active: 11764472 kB' 'Inactive: 3782212 kB' 'Active(anon): 11256612 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612796 kB' 'Mapped: 224140 kB' 'Shmem: 10647220 kB' 'KReclaimable: 649220 kB' 'Slab: 1522952 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873732 kB' 'KernelStack: 27376 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12763016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.469 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.470 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:28.471 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 56919820 kB' 'MemUsed: 8739204 kB' 'SwapCached: 0 kB' 'Active: 5231200 kB' 'Inactive: 271816 kB' 'Active(anon): 4988008 kB' 'Inactive(anon): 0 kB' 'Active(file): 243192 kB' 'Inactive(file): 271816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5184328 kB' 'Mapped: 66948 kB' 'AnonPages: 321968 kB' 'Shmem: 4669320 kB' 'KernelStack: 14504 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330732 kB' 'Slab: 826648 kB' 'SReclaimable: 330732 kB' 'SUnreclaim: 495916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.471 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.472 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679864 kB' 'MemFree: 47445196 kB' 'MemUsed: 13234668 kB' 'SwapCached: 0 kB' 'Active: 6533272 kB' 'Inactive: 3510396 kB' 'Active(anon): 6268604 kB' 'Inactive(anon): 0 kB' 'Active(file): 264668 kB' 'Inactive(file): 3510396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9752964 kB' 'Mapped: 157192 kB' 'AnonPages: 290828 kB' 'Shmem: 5977900 kB' 'KernelStack: 12872 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 318488 kB' 'Slab: 696304 kB' 'SReclaimable: 318488 kB' 'SUnreclaim: 377816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
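The xtrace above is setup/common.sh's get_meminfo helper scanning /sys/devices/system/node/node1/meminfo one key at a time (the long run of "continue" lines) until it reaches HugePages_Surp, then echoing the value and returning. A minimal bash reconstruction of that lookup, paraphrased from the xtrace rather than copied from the SPDK source, looks roughly like this:

    # Reconstruction of setup/common.sh's get_meminfo as seen in the trace above;
    # variable names follow the trace, the body itself is a paraphrase.
    shopt -s extglob
    get_meminfo() {
        local get=$1    # key to report, e.g. HugePages_Surp or AnonHugePages
        local node=$2   # optional NUMA node number
        local var val line
        local mem_f mem

        mem_f=/proc/meminfo
        # The per-node file is used when it exists; with no node argument the path
        # becomes .../node/meminfo, which does not exist, so /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <n> " prefix; strip it so the keys match
        # the plain /proc/meminfo format.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" entries above
            echo "$val"                        # "echo 0" in this trace
            return 0
        done
    }

Called as get_meminfo HugePages_Surp 1 it returns 0 here, which is the value hugepages.sh@117 then adds into nodes_test[1].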
00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.473 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:18:28.474 node0=512 expecting 512 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:18:28.474 node1=512 expecting 512 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:18:28.474 00:18:28.474 real 0m3.613s 00:18:28.474 user 0m1.391s 00:18:28.474 sys 0m2.286s 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:28.474 11:26:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:18:28.474 ************************************ 00:18:28.474 END TEST per_node_1G_alloc 00:18:28.474 ************************************ 00:18:28.474 11:26:57 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:18:28.474 11:26:57 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:28.474 11:26:57 
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:28.474 11:26:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:18:28.474 ************************************ 00:18:28.474 START TEST even_2G_alloc 00:18:28.474 ************************************ 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:28.474 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:18:28.475 11:26:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:31.871 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
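The even_2G_alloc prologue above (setup/hugepages.sh@152 down to @84) asks for 2097152 kB, i.e. 1024 hugepages of 2048 kB, and splits them evenly over the two NUMA nodes before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are handed to scripts/setup.sh. A rough bash paraphrase of that split, reconstructed from the xtrace and not from the verbatim hugepages.sh source, is:

    # Paraphrase of get_test_nr_hugepages / get_test_nr_hugepages_per_node as
    # traced above; the exact arithmetic in hugepages.sh may differ slightly.
    get_test_nr_hugepages() {
        local size=$1                     # 2097152 kB requested (2 GiB)
        local default_hugepages=2048      # kB per page (Hugepagesize on this host)
        local nr_hugepages=$((size / default_hugepages))   # 1024 pages

        local user_nodes=()               # no explicit node list on this run
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=2                 # two NUMA nodes on this host
        local -a nodes_test=()

        # Spread the pages evenly, peeling each node's share off the remaining total.
        while ((_no_nodes > 0)); do
            nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
            : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))   # traced as ": 512", then ": 0"
            : $((--_no_nodes))                                  # traced as ": 1",   then ": 0"
        done

        echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"    # node0=512 node1=512
    }

With that environment set, setup/common.sh@10 invokes scripts/setup.sh, which is what produces the surrounding "Already using the vfio-pci driver" lines.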
00:18:31.871 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:18:31.871 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104329848 kB' 'MemAvailable: 108228196 kB' 'Buffers: 3736 kB' 'Cached: 14933688 kB' 'SwapCached: 0 kB' 'Active: 11768508 kB' 'Inactive: 3782212 kB' 'Active(anon): 11260648 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616848 kB' 'Mapped: 225104 kB' 'Shmem: 10647352 kB' 'KReclaimable: 649220 kB' 'Slab: 1522712 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873492 kB' 'KernelStack: 27536 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12770136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235748 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.871 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
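The verification that starts at setup/hugepages.sh@89 first checks /sys/kernel/mm/transparent_hugepage/enabled (the "always [madvise] never != *[never]*" test above) and then reads AnonHugePages from /proc/meminfo, ending in "anon=0". A small bash sketch of that probe, using a hypothetical stand-alone helper name (in hugepages.sh this logic sits inside verify_nr_hugepages) and the get_meminfo lookup sketched earlier:

    # Hypothetical stand-alone version of the anon-hugepage probe traced above.
    get_anon_hugepages() {
        local thp anon=0
        thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

        # The active THP mode is the bracketed word; only read AnonHugePages
        # when THP is not pinned to "never".
        if [[ $thp != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)   # 0 kB in this run, hence "anon=0"
        fi
        echo "$anon"
    }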
00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.872 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104324124 kB' 'MemAvailable: 108222472 kB' 'Buffers: 3736 kB' 'Cached: 14933692 kB' 'SwapCached: 0 kB' 'Active: 11772116 kB' 'Inactive: 3782212 kB' 'Active(anon): 11264256 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620252 kB' 'Mapped: 224668 kB' 'Shmem: 10647356 kB' 'KReclaimable: 649220 kB' 'Slab: 1522756 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873536 kB' 'KernelStack: 27520 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12774656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235732 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.873 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.874 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104323876 kB' 'MemAvailable: 108222224 kB' 'Buffers: 3736 kB' 'Cached: 14933708 kB' 'SwapCached: 0 kB' 'Active: 11772692 kB' 'Inactive: 3782212 kB' 'Active(anon): 11264832 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620896 kB' 'Mapped: 224944 kB' 'Shmem: 10647372 kB' 'KReclaimable: 649220 kB' 'Slab: 1522756 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873536 kB' 'KernelStack: 27520 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12774676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235736 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 
11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.875 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.876 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:31.877 nr_hugepages=1024 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:31.877 resv_hugepages=0 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:31.877 surplus_hugepages=0 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:31.877 anon_hugepages=0 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
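Editor's note on the trace above: this is setup/common.sh's get_meminfo helper walking every /proc/meminfo key until it reaches the one requested (HugePages_Surp for hugepages.sh@99, HugePages_Rsvd for @100, and HugePages_Total for the @110 pass that begins here), skipping all other keys with the repeated "continue" lines. A minimal, illustrative re-implementation of that lookup loop (not copied from the SPDK scripts, and omitting their per-NUMA-node meminfo handling) looks like this:

get_meminfo_sketch() {                      # illustrative name, not the SPDK helper itself
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # same "IFS=': '; read -r var val _" pattern seen in the trace
        [[ $var == "$get" ]] || continue    # skip every other key, as the repeated continue lines show
        echo "$val"                         # numeric value only; a trailing unit such as "kB" lands in $_
        return 0
    done < /proc/meminfo
    return 1
}
# e.g. surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 on this node, matching "surp=0" above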
00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104323460 kB' 'MemAvailable: 108221808 kB' 'Buffers: 3736 kB' 'Cached: 14933732 kB' 'SwapCached: 0 kB' 'Active: 11767176 kB' 'Inactive: 3782212 kB' 'Active(anon): 11259316 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 615356 kB' 'Mapped: 224440 kB' 'Shmem: 10647396 kB' 'KReclaimable: 649220 kB' 'Slab: 1522756 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873536 kB' 'KernelStack: 27520 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12768216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235748 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 
11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.877 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.878 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 
11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 56900968 kB' 'MemUsed: 8758056 kB' 'SwapCached: 0 kB' 'Active: 5231056 kB' 'Inactive: 271816 kB' 'Active(anon): 4987864 kB' 'Inactive(anon): 0 kB' 'Active(file): 243192 kB' 'Inactive(file): 271816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5184492 kB' 'Mapped: 67476 kB' 'AnonPages: 321596 kB' 'Shmem: 4669484 kB' 'KernelStack: 14536 kB' 'PageTables: 
5032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330732 kB' 'Slab: 826580 kB' 'SReclaimable: 330732 kB' 'SUnreclaim: 495848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.879 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 
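Just before this scan, hugepages.sh@110 confirmed the global allocation with (( 1024 == nr_hugepages + surp + resv )), and get_nodes recorded 512 hugepages on each of the two NUMA nodes. The trace now repeats the same key lookup per node, targeting HugePages_Surp for node 0; the only difference from the /proc/meminfo case is the source file and a prefix strip. Roughly (paths as shown in the trace, glue code approximate):

    # per-node variant of the lookup, as traced for node 0 (node 1 follows further down)
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob                          # needed for the +([0-9]) pattern below
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp    # 0 on both nodes in this run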
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679864 kB' 'MemFree: 47412324 kB' 'MemUsed: 13267540 kB' 'SwapCached: 0 kB' 'Active: 6542272 kB' 'Inactive: 3510396 kB' 'Active(anon): 6277604 kB' 'Inactive(anon): 0 kB' 'Active(file): 264668 kB' 'Inactive(file): 3510396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9752996 kB' 'Mapped: 157192 kB' 'AnonPages: 299380 kB' 'Shmem: 5977932 kB' 'KernelStack: 13000 kB' 
'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 318488 kB' 'Slab: 696176 kB' 'SReclaimable: 318488 kB' 'SUnreclaim: 377688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.880 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.881 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.882 11:27:00 
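The node-1 surplus scan finishes just below with the same answer as node 0 (HugePages_Surp: 0), after which hugepages.sh folds the per-node numbers together and prints node0=512 expecting 512 and node1=512 expecting 512. The bookkeeping being traced amounts to the following; the 512s and the zero surplus/reserved counts are taken from this log, while the exact array roles are inferred from the variable names in the trace:

    # even_2G_alloc accounting: 1024 pages split evenly across 2 nodes
    declare -a nodes_test=([0]=512 [1]=512)   # expected per-node counts
    declare -a nodes_sys=([0]=512 [1]=512)    # counts read back from sysfs by get_nodes
    declare -a sorted_t sorted_s
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))        # reserved pages, 0 in this run
        surp=0                                # get_meminfo HugePages_Surp $node -> 0 above
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1          # collapse distinct expected counts
        sorted_s[nodes_sys[node]]=1           # collapse distinct observed counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # both reduce to the single value 512, so the check passes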
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:18:31.882 node0=512 expecting 512 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:18:31.882 node1=512 expecting 512 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:18:31.882 00:18:31.882 real 0m3.358s 00:18:31.882 user 0m1.270s 00:18:31.882 sys 0m2.099s 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:31.882 11:27:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:18:31.882 ************************************ 00:18:31.882 END TEST even_2G_alloc 00:18:31.882 ************************************ 00:18:31.882 11:27:00 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:18:31.882 11:27:00 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:31.882 11:27:00 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:31.882 11:27:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:18:31.882 ************************************ 00:18:31.882 START TEST odd_alloc 00:18:31.882 
************************************ 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:18:31.882 11:27:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:35.186 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 
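The odd_alloc case that starts above asks for 2,098,176 kB of hugepage memory (HUGEMEM=2049), which at the 2,048 kB default page size becomes nr_hugepages=1025; since 1025 cannot split evenly across two nodes, the trace assigns 512 pages to node 1 and 513 to node 0, with HUGE_EVEN_ALLOC=yes forcing the per-node spread. The scripts/setup.sh run whose output surrounds this note then re-applies the hugepage settings and reports that every test device is already bound to vfio-pci. A sketch of the sizing arithmetic; the rounding expression is an approximation of the helper, only the inputs and results are from the log:

    HUGEMEM=2049                                   # MB, exported by the test
    default_hugepages=2048                         # kB per 2M hugepage
    size=$(( HUGEMEM * 1024 ))                     # 2098176 kB, the value passed to get_test_nr_hugepages
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))   # 1025 (rounded up; approximation)
    _no_nodes=2
    nodes_test[1]=$(( nr_hugepages / _no_nodes ))            # 512, assigned first in the trace
    nodes_test[0]=$(( nr_hugepages - nodes_test[1] ))        # 513, so node 0 carries the odd page
    echo "${nodes_test[@]}"                                  # 513 512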
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:18:35.186 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:35.186 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104282396 kB' 'MemAvailable: 108180744 kB' 'Buffers: 3736 kB' 'Cached: 14933868 kB' 'SwapCached: 0 kB' 'Active: 11774300 kB' 'Inactive: 3782212 kB' 'Active(anon): 11266440 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621764 kB' 'Mapped: 225184 kB' 'Shmem: 10647532 kB' 'KReclaimable: 649220 kB' 'Slab: 1522916 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873696 kB' 'KernelStack: 27520 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 12774308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235864 kB' 
'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.453 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 
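The xtrace block above is setup/common.sh's get_meminfo helper resolving AnonHugePages: each /proc/meminfo line is split on IFS=': ' into a key and a value, non-matching keys fall through to "continue", and the matching key's value is echoed (0 here, so hugepages.sh records anon=0). A minimal sketch of that lookup, assuming plain /proc/meminfo input (illustrative only, not the SPDK helper verbatim):

    # Hypothetical, simplified stand-in for the traced helper.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys other than the requested one
            echo "$val"                        # value only (kB count or bare page count)
            return 0
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo_sketch AnonHugePages)   # resolves to 0 in this run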
11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104285580 kB' 'MemAvailable: 108183928 kB' 'Buffers: 3736 kB' 'Cached: 14933872 kB' 'SwapCached: 0 kB' 'Active: 11774180 kB' 'Inactive: 3782212 kB' 'Active(anon): 11266320 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621620 kB' 'Mapped: 225184 kB' 'Shmem: 10647536 kB' 'KReclaimable: 649220 kB' 'Slab: 1522916 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873696 kB' 'KernelStack: 27504 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 12774328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.454 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.455 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104287160 kB' 'MemAvailable: 108185508 kB' 'Buffers: 3736 kB' 'Cached: 14933888 kB' 'SwapCached: 0 kB' 'Active: 11773520 kB' 'Inactive: 3782212 kB' 'Active(anon): 11265660 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621400 kB' 'Mapped: 225104 kB' 'Shmem: 10647552 kB' 'KReclaimable: 649220 kB' 'Slab: 1522892 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873672 kB' 'KernelStack: 27488 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 12774348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- 
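Before each of these lookups the helper also probes for a per-NUMA-node meminfo file; with $node empty the path collapses to /sys/devices/system/node/node/meminfo (as seen at common.sh@23 above), the -e test fails, and the system-wide /proc/meminfo is read instead. A sketch of that selection, with illustrative names:

    node=""                                    # empty here, so system-wide totals are used
    mem_f=/proc/meminfo
    # With node unset the probe degenerates to .../node/node/meminfo, which
    # does not exist, so mem_f stays /proc/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    grep -E '^(Node [0-9]+ )?HugePages_(Total|Free|Rsvd|Surp):' "$mem_f"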
setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.456 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 
11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.457 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:18:35.458 nr_hugepages=1025 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:35.458 resv_hugepages=0 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:35.458 surplus_hugepages=0 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:35.458 anon_hugepages=0 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- 
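At this point hugepages.sh has collected anon=0, surp=0 and resv=0, echoed nr_hugepages=1025, and asserted (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )) before re-reading HugePages_Total for the final comparison. A hedged reconstruction of that odd_alloc consistency check, reusing the illustrative helper sketched earlier (the exact variable layout in hugepages.sh is assumed, not quoted):

    nr_hugepages=1025                             # the odd page count under test
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # 1025 in this run

    (( total == nr_hugepages + surp + resv ))     # reported total accounts for surplus + reserved
    (( total == nr_hugepages ))                   # and matches the odd request exactly
    echo $(( total * 2048 ))                      # 2099200 kB, matching 'Hugetlb: 2099200 kB' above

With 2048 kB pages, 1025 pages is 2,099,200 kB, which is exactly the Hugetlb figure printed in the meminfo snapshots, so the odd-sized allocation was honoured.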
setup/common.sh@20 -- # local mem_f mem 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.458 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104286780 kB' 'MemAvailable: 108185128 kB' 'Buffers: 3736 kB' 'Cached: 14933908 kB' 'SwapCached: 0 kB' 'Active: 11773552 kB' 'Inactive: 3782212 kB' 'Active(anon): 11265692 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621396 kB' 'Mapped: 225104 kB' 'Shmem: 10647572 kB' 'KReclaimable: 649220 kB' 'Slab: 1522892 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873672 kB' 'KernelStack: 27488 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508448 kB' 'Committed_AS: 12774368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235832 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.459 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 56892160 kB' 'MemUsed: 8766864 kB' 'SwapCached: 0 kB' 'Active: 5230976 kB' 'Inactive: 271816 kB' 'Active(anon): 4987784 kB' 'Inactive(anon): 0 kB' 'Active(file): 243192 kB' 'Inactive(file): 271816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5184612 kB' 'Mapped: 67912 kB' 'AnonPages: 321420 kB' 'Shmem: 4669604 kB' 'KernelStack: 14520 kB' 'PageTables: 5024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330732 kB' 'Slab: 826772 kB' 'SReclaimable: 330732 kB' 'SUnreclaim: 496040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.460 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:18:35.461 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679864 kB' 'MemFree: 47393080 kB' 'MemUsed: 13286784 kB' 'SwapCached: 0 kB' 'Active: 6542484 kB' 'Inactive: 3510396 kB' 'Active(anon): 6277816 kB' 'Inactive(anon): 0 kB' 'Active(file): 264668 kB' 'Inactive(file): 3510396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9753052 kB' 'Mapped: 157192 kB' 'AnonPages: 299904 kB' 'Shmem: 5977988 kB' 'KernelStack: 12920 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 318488 kB' 'Slab: 696120 kB' 'SReclaimable: 318488 kB' 'SUnreclaim: 377632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.462 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:18:35.463 node0=512 expecting 513 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:18:35.463 node1=513 expecting 512 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:18:35.463 00:18:35.463 real 0m3.568s 00:18:35.463 user 0m1.409s 00:18:35.463 sys 0m2.225s 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:35.463 11:27:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:18:35.463 ************************************ 00:18:35.463 END TEST odd_alloc 00:18:35.463 ************************************ 00:18:35.463 11:27:04 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:18:35.463 11:27:04 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:35.463 11:27:04 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:35.463 11:27:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:18:35.724 ************************************ 00:18:35.724 START TEST custom_alloc 00:18:35.724 ************************************ 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:18:35.724 11:27:04 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:18:35.724 11:27:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:39.030 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:18:39.030 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:18:39.030 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.030 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 103272972 kB' 'MemAvailable: 107171824 kB' 'Buffers: 3736 kB' 'Cached: 14934040 kB' 'SwapCached: 0 kB' 'Active: 11774540 kB' 'Inactive: 3782212 kB' 'Active(anon): 11266680 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621728 kB' 'Mapped: 225220 kB' 'Shmem: 10647704 kB' 'KReclaimable: 649220 kB' 'Slab: 1523080 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873860 kB' 'KernelStack: 27472 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 12775124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235736 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:18:39.031 11:27:07 setup.sh.hugepages.custom_alloc -- (fields Inactive(anon) through HardwareCorrupted likewise fail the AnonHugePages match and fall through to continue)
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
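The get_meminfo calls traced here reduce to one small pattern: read the chosen meminfo file, split each line on ': ', skip every key that is not the requested one, and echo the value of the first match. A minimal standalone sketch of that pattern follows; it is an illustration only, not the actual setup/common.sh helper, and the function name is made up.

    #!/usr/bin/env bash
    # Sketch of the IFS=': ' / read loop the trace above keeps repeating.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys fall through, as in the log
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on this host, hence "anon=0" above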
00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 103276012 kB' 'MemAvailable: 107174360 kB' 'Buffers: 3736 kB' 'Cached: 14934040 kB' 'SwapCached: 0 kB' 'Active: 11775068 kB' 'Inactive: 3782212 kB' 'Active(anon): 11267208 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622264 kB' 'Mapped: 225208 kB' 'Shmem: 10647704 kB' 'KReclaimable: 649220 kB' 'Slab: 1523064 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873844 kB' 'KernelStack: 27456 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 12774776 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235672 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.032 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.032 11:27:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:18:39.033 11:27:07 setup.sh.hugepages.custom_alloc -- (fields SwapCached through Unaccepted likewise fail the HugePages_Surp match and fall through to continue)
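Each counter (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd next) triggers a fresh pass over the whole snapshot, which is why the same continue pattern repeats three times in this part of the log. As an aside, a single pass can collect every HugePages_* counter at once; the sketch below is illustrative only and not how setup/hugepages.sh does it.

    # One pass over /proc/meminfo, keeping all HugePages_* counters in an associative array.
    declare -A hp
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_* ]] && hp[$var]=$val
    done </proc/meminfo

    echo "Total=${hp[HugePages_Total]} Free=${hp[HugePages_Free]}" \
         "Rsvd=${hp[HugePages_Rsvd]} Surp=${hp[HugePages_Surp]}"
    # The snapshots above report 1536/1536/0/0 on this node.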
00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 103276860 kB' 'MemAvailable: 107175208 kB' 'Buffers: 3736 kB' 'Cached: 14934056 kB' 'SwapCached: 0 kB' 'Active: 11773512 kB' 'Inactive: 3782212 kB' 'Active(anon): 11265652 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621128 kB' 'Mapped: 225132 kB' 'Shmem: 10647720 kB' 'KReclaimable: 649220 kB' 'Slab: 1523040 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873820 kB' 'KernelStack: 27392 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 12774800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235640 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.034 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.034 
11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:18:39.035 11:27:07 setup.sh.hugepages.custom_alloc -- (fields Inactive through ShmemHugePages likewise fail the HugePages_Rsvd match and fall through to continue)
00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:18:39.036 nr_hugepages=1536 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:39.036 resv_hugepages=0 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:39.036 surplus_hugepages=0 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:39.036 anon_hugepages=0 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.036 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 103277040 kB' 'MemAvailable: 107175388 kB' 'Buffers: 3736 kB' 'Cached: 14934096 kB' 'SwapCached: 0 kB' 'Active: 11773476 kB' 'Inactive: 3782212 kB' 'Active(anon): 11265616 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621052 kB' 'Mapped: 225132 kB' 'Shmem: 10647760 kB' 'KReclaimable: 649220 kB' 'Slab: 1523040 kB' 'SReclaimable: 649220 kB' 'SUnreclaim: 873820 kB' 'KernelStack: 27376 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985184 kB' 'Committed_AS: 12774952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235640 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.300 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [setup/common.sh@31-@32 repeat the same IFS=': ' / read -r var val _ / continue sequence for every remaining non-matching /proc/meminfo key from Inactive(anon) through ShmemPmdMapped while scanning for HugePages_Total] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 56901828 kB' 'MemUsed: 8757196 kB' 'SwapCached: 0 kB' 'Active: 5232036 kB' 'Inactive: 271816 kB' 'Active(anon): 4988844 kB' 'Inactive(anon): 0 kB' 'Active(file): 243192 kB' 'Inactive(file): 271816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5184764 kB' 'Mapped: 67940 kB' 'AnonPages: 322308 kB' 'Shmem: 4669756 kB' 'KernelStack: 14488 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330732 kB' 'Slab: 826948 kB' 'SReclaimable: 330732 kB' 'SUnreclaim: 496216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.302 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [setup/common.sh@31-@32 repeat the same IFS=': ' / read -r var val _ / continue sequence for every non-matching node 0 meminfo key from Active through AnonHugePages while scanning for HugePages_Surp] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
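Every lookup traced so far (HugePages_Rsvd and HugePages_Total against /proc/meminfo, and now HugePages_Surp against node 0's own meminfo file) follows the same scan pattern: read one "Key: value" pair per iteration and skip it with continue until the requested key matches. A minimal, self-contained sketch of that pattern follows; it mirrors the commands visible in the xtrace (mapfile-style line reads, IFS=': ', read -r var val _, continue) but it is a reconstruction, not the actual setup/common.sh source, and meminfo_value is a stand-in name for the traced get_meminfo helper.

#!/usr/bin/env bash
# Stand-in for the traced get_meminfo helper: scan a meminfo file for one key.
meminfo_value() {
    local get=$1 var val _
    # Each non-matching key is one of the repeated 'continue' entries in the trace;
    # the matching key echoes its value and ends the scan.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Rsvd    # 0 on this runner, matching resv=0 above
meminfo_value HugePages_Total   # 1536 on this runner, matching nr_hugepages=1536 above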
00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
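At this point node 0's half of the accounting is done: its meminfo reports HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, the helper returns 0, and hugepages.sh folds that into nodes_test[0] before moving on to node 1. Together with the nodes_sys split recorded earlier (512 pages on node 0 and 1024 on node 1, i.e. 512 + 1024 = 1536 = nr_hugepages), the loop at hugepages.sh@115-@117 amounts to the per-node pass sketched below. The variable and array names come from the trace, but the control flow and the initial contents of nodes_test are assumptions rather than the real script source, and node_meminfo_value is the same stand-in idiom as above applied to a node's meminfo file, whose lines carry a "Node N " prefix.

#!/usr/bin/env bash
# Hedged reconstruction of the per-node surplus/reserved accounting seen in the trace.
nodes_sys=([0]=512 [1]=1024)    # requested per-node hugepage split (hugepages.sh@30)
nodes_test=([0]=512 [1]=1024)   # expected totals under test (initial values assumed)
resv=0                          # HugePages_Rsvd read earlier in this log

node_meminfo_value() {
    # Same scan idiom as above, against /sys/devices/system/node/nodeN/meminfo;
    # strip the "Node N " prefix so the key names line up with /proc/meminfo.
    local node=$1 get=$2 line var val _
    while IFS= read -r line; do
        line=${line#Node "$node" }
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "/sys/devices/system/node/node$node/meminfo"
    return 1
}

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                          # hugepages.sh@116
    (( nodes_test[node] += $(node_meminfo_value "$node" HugePages_Surp) ))  # hugepages.sh@117, 0 here
done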
00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679864 kB' 'MemFree: 46375132 kB' 'MemUsed: 14304732 kB' 'SwapCached: 0 kB' 'Active: 6541488 kB' 'Inactive: 3510396 kB' 'Active(anon): 6276820 kB' 'Inactive(anon): 0 kB' 'Active(file): 264668 kB' 'Inactive(file): 3510396 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9753088 kB' 'Mapped: 157192 kB' 'AnonPages: 298776 kB' 'Shmem: 5978024 kB' 'KernelStack: 12904 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 318488 kB' 'Slab: 696092 kB' 'SReclaimable: 318488 kB' 'SUnreclaim: 377604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.303 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.304 11:27:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [setup/common.sh@31-@32 repeat the same IFS=': ' / read -r var val _ / continue sequence for every non-matching node 1 meminfo key from Active through AnonHugePages while scanning for HugePages_Surp] 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.304 11:27:08
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.304 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:18:39.305 node0=512 expecting 512 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:18:39.305 node1=1024 expecting 1024 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:18:39.305 00:18:39.305 real 0m3.663s 00:18:39.305 user 0m1.482s 00:18:39.305 sys 0m2.249s 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:39.305 11:27:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:18:39.305 ************************************ 00:18:39.305 END TEST custom_alloc 00:18:39.305 ************************************ 00:18:39.305 11:27:08 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:18:39.305 11:27:08 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:39.305 11:27:08 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:39.305 11:27:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:18:39.305 ************************************ 00:18:39.305 START TEST no_shrink_alloc 00:18:39.305 ************************************ 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:18:39.305 11:27:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:42.609 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:18:42.609 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:42.609 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:42.875 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:42.875 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.875 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104300636 kB' 'MemAvailable: 108198920 kB' 'Buffers: 3736 kB' 'Cached: 14934220 kB' 'SwapCached: 0 kB' 'Active: 11770864 kB' 'Inactive: 3782212 kB' 'Active(anon): 11263004 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618492 kB' 'Mapped: 224228 kB' 'Shmem: 10647884 kB' 'KReclaimable: 649156 kB' 'Slab: 1522980 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873824 kB' 'KernelStack: 27472 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12767920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.876 11:27:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104303740 kB' 'MemAvailable: 108202024 kB' 'Buffers: 3736 kB' 'Cached: 14934224 kB' 'SwapCached: 0 kB' 'Active: 11771160 kB' 'Inactive: 3782212 kB' 'Active(anon): 11263300 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618784 kB' 'Mapped: 224228 kB' 'Shmem: 10647888 kB' 'KReclaimable: 649156 kB' 'Slab: 1522972 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873816 kB' 'KernelStack: 27456 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12769456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 
11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.877 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.878 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104303828 kB' 'MemAvailable: 108202112 kB' 'Buffers: 3736 kB' 'Cached: 14934244 kB' 'SwapCached: 0 kB' 'Active: 11770992 kB' 'Inactive: 3782212 kB' 'Active(anon): 11263132 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618596 kB' 'Mapped: 224228 kB' 'Shmem: 10647908 kB' 'KReclaimable: 649156 kB' 'Slab: 1523028 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873872 kB' 'KernelStack: 27440 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12769360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 
kB' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.879 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 
11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.880 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:42.881 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:42.881 nr_hugepages=1024 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:42.881 resv_hugepages=0 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:42.881 surplus_hugepages=0 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:42.881 anon_hugepages=0 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104304468 kB' 'MemAvailable: 108202752 kB' 'Buffers: 3736 kB' 'Cached: 14934280 kB' 'SwapCached: 0 kB' 'Active: 11769936 kB' 'Inactive: 3782212 kB' 'Active(anon): 11262076 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617468 kB' 'Mapped: 224228 kB' 'Shmem: 10647944 kB' 'KReclaimable: 649156 kB' 'Slab: 1523028 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873872 kB' 'KernelStack: 27360 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12768816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.881 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 
11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.882 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.882 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55851792 kB' 'MemUsed: 9807232 kB' 'SwapCached: 0 kB' 'Active: 5233860 kB' 'Inactive: 271816 kB' 'Active(anon): 4990668 kB' 'Inactive(anon): 0 kB' 'Active(file): 243192 kB' 'Inactive(file): 271816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5184872 kB' 'Mapped: 67036 kB' 'AnonPages: 324008 kB' 'Shmem: 4669864 kB' 'KernelStack: 14536 kB' 'PageTables: 5108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330668 kB' 'Slab: 826808 kB' 'SReclaimable: 330668 kB' 'SUnreclaim: 496140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.883 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.884 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:42.885 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:18:43.146 node0=1024 expecting 1024 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:18:43.146 11:27:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:46.455 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:18:46.455 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:18:46.455 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104329416 kB' 'MemAvailable: 108227700 kB' 'Buffers: 3736 kB' 'Cached: 14934376 kB' 'SwapCached: 0 kB' 'Active: 11769784 kB' 'Inactive: 3782212 kB' 'Active(anon): 11261924 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617184 kB' 'Mapped: 224268 kB' 'Shmem: 10648040 kB' 'KReclaimable: 649156 kB' 'Slab: 1522756 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873600 kB' 'KernelStack: 27392 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12767756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
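The wall of "[[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" entries above and below is setup/common.sh's get_meminfo helper doing a linear scan of /proc/meminfo: each "Key: value" line is read with IFS=': ', non-matching keys fall through to continue, and once the requested key (AnonHugePages here) is reached the value is echoed and the function returns 0. A minimal standalone sketch of that lookup pattern follows; the helper name get_meminfo_value and the two example calls are illustrative only, and the project's real helper additionally handles per-node meminfo files and strips the "Node N" prefix.

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above (hypothetical helper name).
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, like the "continue" entries in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo_value HugePages_Total    # prints 1024 on this machine
get_meminfo_value AnonHugePages      # prints the kB figure, 0 on this machine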
00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.455 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
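Every /proc/meminfo snapshot captured in this stretch reports HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is the state behind the earlier "INFO: Requested 512 hugepages but 1024 already allocated on node0" and "node0=1024 expecting 1024" messages: 1024 pages x 2048 kB per page = 2097152 kB. The short sketch below is one way to reproduce that bookkeeping outside the harness; the variable names and echo messages are illustrative, while /proc/meminfo and the per-node nr_hugepages sysfs file are standard kernel interfaces.

#!/usr/bin/env bash
# Sketch only: cross-check the hugepage pool reported in the dumps above.
expected_node0=1024     # the harness prints "node0=1024 expecting 1024"

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
size_kb=$(awk '/^Hugepagesize:/  {print $2}' /proc/meminfo)   # 2048 in this run
hugetlb_kb=$(awk '/^Hugetlb:/    {print $2}' /proc/meminfo)   # 2097152 in this run

# 1024 pages * 2048 kB/page = 2097152 kB, matching the Hugetlb line.
(( total * size_kb == hugetlb_kb )) && echo "hugepage pool is consistent"

node0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-${size_kb}kB/nr_hugepages)
[[ $node0 -eq $expected_node0 ]] && echo "node0=$node0 expecting $expected_node0"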
00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:46.456 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104332592 kB' 'MemAvailable: 108230876 kB' 'Buffers: 3736 kB' 'Cached: 14934376 kB' 'SwapCached: 0 kB' 'Active: 11769484 kB' 'Inactive: 3782212 kB' 'Active(anon): 11261624 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616916 kB' 'Mapped: 224268 kB' 'Shmem: 10648040 kB' 'KReclaimable: 649156 kB' 'Slab: 1522756 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873600 kB' 'KernelStack: 27376 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12767900 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 
11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.457 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.458 11:27:15 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104331728 kB' 'MemAvailable: 108230012 kB' 'Buffers: 3736 kB' 'Cached: 14934400 kB' 'SwapCached: 0 kB' 'Active: 11769188 kB' 'Inactive: 3782212 kB' 'Active(anon): 11261328 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616612 kB' 'Mapped: 224244 kB' 'Shmem: 10648064 kB' 'KReclaimable: 649156 kB' 'Slab: 1522800 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873644 kB' 'KernelStack: 27392 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12767928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.459 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 
11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:18:46.460 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:18:46.460 nr_hugepages=1024 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:18:46.461 resv_hugepages=0 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:18:46.461 surplus_hugepages=0 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:18:46.461 anon_hugepages=0 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- 
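The walk above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the one that was asked for (HugePages_Rsvd here, which comes back 0 and becomes resv=0). The per-character escapes such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just how bash xtrace prints a quoted right-hand side of [[ == ]], i.e. a literal rather than glob comparison. Below is a minimal sketch of that lookup with names taken from the trace; the mapfile/printf plumbing is simplified, so treat it as a reading of the helper rather than the verbatim SPDK code.

    # Minimal sketch of the lookup traced above; not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1
        local var val _ mem
        mapfile -t mem < /proc/meminfo
        while IFS=': ' read -r var val _; do
            # Quoting "$get" forces a literal match; xtrace renders that
            # quoting as the backslash escapes seen in the log.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
            continue   # every non-matching key, which is what fills the trace
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    resv=$(get_meminfo HugePages_Rsvd)   # -> 0 in this run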
setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338888 kB' 'MemFree: 104331224 kB' 'MemAvailable: 108229508 kB' 'Buffers: 3736 kB' 'Cached: 14934440 kB' 'SwapCached: 0 kB' 'Active: 11768868 kB' 'Inactive: 3782212 kB' 'Active(anon): 11261008 kB' 'Inactive(anon): 0 kB' 'Active(file): 507860 kB' 'Inactive(file): 3782212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 616224 kB' 'Mapped: 224244 kB' 'Shmem: 10648104 kB' 'KReclaimable: 649156 kB' 'Slab: 1522800 kB' 'SReclaimable: 649156 kB' 'SUnreclaim: 873644 kB' 'KernelStack: 27376 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509472 kB' 'Committed_AS: 12767952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 7583108 kB' 'DirectMap2M: 32991232 kB' 'DirectMap1G: 95420416 kB' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.461 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.461 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:18:46.462 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659024 kB' 'MemFree: 55863948 kB' 'MemUsed: 9795076 kB' 'SwapCached: 0 kB' 'Active: 5230968 kB' 'Inactive: 271816 kB' 'Active(anon): 4987776 kB' 'Inactive(anon): 0 kB' 'Active(file): 243192 kB' 'Inactive(file): 271816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5185012 kB' 'Mapped: 67052 kB' 'AnonPages: 320964 kB' 'Shmem: 4670004 kB' 'KernelStack: 14488 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330668 kB' 'Slab: 826552 kB' 'SReclaimable: 330668 kB' 'SUnreclaim: 495884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.463 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 
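The second half of the walk repeats the same lookup, but for node 0: because get_meminfo was called with node=0, the source switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo (the printf above is that node-local snapshot, MemTotal: 65659024 kB), and every line's "Node 0 " prefix is stripped before matching. In the earlier, node-less call the probe of /sys/devices/system/node/node/meminfo simply failed and /proc/meminfo was kept. A small sketch of just that source selection, assuming extglob is on as the +([0-9]) pattern requires:

    shopt -s extglob
    # Pick the meminfo source the way the trace shows: node-local if a node
    # index was given and its sysfs file exists, /proc/meminfo otherwise.
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node 0 "; drop it so the same
    # key-matching loop works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | head -n 3   # "MemTotal: 65659024 kB", ...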
11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.725 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:18:46.726 node0=1024 expecting 1024 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:18:46.726 00:18:46.726 real 0m7.271s 00:18:46.726 user 0m2.860s 00:18:46.726 sys 0m4.539s 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:46.726 11:27:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:18:46.726 ************************************ 00:18:46.726 END TEST no_shrink_alloc 00:18:46.726 ************************************ 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
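With the global check satisfied (1024 == nr_hugepages + surp + resv), the test descends to NUMA nodes: get_nodes records each node's 2 MiB hugepage count (1024 on node 0, 0 on node 1, so no_nodes=2), the node-0 HugePages_Surp lookup returns 0, and the test prints node0=1024 expecting 1024. The lines that follow are the clear_hp teardown, which writes 0 back for every node and hugepage size and exports CLEAR_HUGE=yes. A condensed sketch of that bookkeeping; reading nr_hugepages from sysfs, and writing 0 back to it, are assumptions, since xtrace shows neither the unexpanded read nor the redirection target.

    shopt -s extglob
    declare -A nodes_sys

    get_nodes() {                 # assumed source of the 1024/0 values in the trace
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
    }

    clear_hp() {                  # traced as bare "echo 0"; target assumed
        local node hp
        for node in "${!nodes_sys[@]}"; do
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }

    get_nodes
    echo "node0=${nodes_sys[0]} expecting 1024"   # the line echoed in the log
    [[ ${nodes_sys[0]} == 1024 ]]                 # the assertion the test makes
    clear_hp                                      # teardown traced on the next lines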
00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:18:46.726 11:27:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:18:46.726 00:18:46.726 real 0m25.507s 00:18:46.726 user 0m9.920s 00:18:46.726 sys 0m15.893s 00:18:46.726 11:27:15 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:46.726 11:27:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:18:46.726 ************************************ 00:18:46.726 END TEST hugepages 00:18:46.726 ************************************ 00:18:46.726 11:27:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:18:46.726 11:27:15 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:46.726 11:27:15 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:46.726 11:27:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:18:46.726 ************************************ 00:18:46.726 START TEST driver 00:18:46.726 ************************************ 00:18:46.726 11:27:15 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:18:46.726 * Looking for test storage... 
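Before the driver test gets going, the clear_hp helper traced just above returns the machine to a clean state: it writes 0 to every per-node hugepage count and exports CLEAR_HUGE=yes for later setup.sh invocations. A hedged sketch of that reset, assuming the standard sysfs layout and a root shell (the actual helper's loop variables differ slightly):

    # Sketch: drop all reserved hugepages on every NUMA node, as clear_hp does above.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node_dir"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # release every page of this size on this node
        done
    done
    export CLEAR_HUGE=yes                  # signals later setup.sh runs that the pool was cleared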
00:18:46.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:18:46.726 11:27:15 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:18:46.726 11:27:15 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:46.726 11:27:15 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:52.017 11:27:20 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:18:52.017 11:27:20 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:52.017 11:27:20 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:52.017 11:27:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:18:52.017 ************************************ 00:18:52.017 START TEST guess_driver 00:18:52.017 ************************************ 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:18:52.017 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:18:52.017 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:18:52.017 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:18:52.017 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:18:52.017 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:18:52.017 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:18:52.017 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:18:52.017 11:27:20 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:18:52.017 Looking for driver=vfio-pci 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:18:52.017 11:27:20 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.315 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:18:55.316 11:27:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:00.602 00:19:00.602 real 0m8.276s 00:19:00.602 user 0m2.802s 00:19:00.602 sys 0m4.699s 00:19:00.602 11:27:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:00.602 11:27:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:19:00.602 ************************************ 00:19:00.602 END TEST guess_driver 00:19:00.602 ************************************ 00:19:00.602 00:19:00.602 real 0m13.168s 00:19:00.602 user 0m4.295s 00:19:00.602 sys 0m7.350s 00:19:00.602 11:27:28 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:00.602 
11:27:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:19:00.602 ************************************ 00:19:00.602 END TEST driver 00:19:00.602 ************************************ 00:19:00.602 11:27:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:19:00.602 11:27:28 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:00.602 11:27:28 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:00.602 11:27:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:19:00.602 ************************************ 00:19:00.602 START TEST devices 00:19:00.602 ************************************ 00:19:00.602 11:27:28 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:19:00.602 * Looking for test storage... 00:19:00.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:19:00.602 11:27:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:19:00.602 11:27:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:19:00.602 11:27:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:19:00.602 11:27:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:03.929 11:27:32 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:03.929 No valid GPT data, 
bailing 00:19:03.929 11:27:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:19:03.929 11:27:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:03.929 11:27:32 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:19:03.929 11:27:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:03.929 11:27:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:19:03.929 ************************************ 00:19:03.929 START TEST nvme_mount 00:19:03.929 ************************************ 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:19:03.929 11:27:32 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:19:03.929 11:27:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:19:04.878 Creating new GPT entries in memory. 00:19:04.878 GPT data structures destroyed! You may now partition the disk using fdisk or 00:19:04.878 other utilities. 00:19:04.878 11:27:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:19:04.878 11:27:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:04.878 11:27:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:19:04.878 11:27:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:04.878 11:27:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:19:06.263 Creating new GPT entries in memory. 00:19:06.263 The operation has completed successfully. 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2094642 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
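At this point nvme_mount has zapped the disk, created one 1 GiB partition (sectors 2048 through 2099199) with sgdisk, waited for the udev event, and the mkfs helper has formatted and mounted it before the verify step starts walking the PCI list. Reduced to its essentials, and with the workspace path as it appears in this run, the format-and-mount step is roughly (run as root; the real helper also takes an optional size argument):

    # Sketch of the format-and-mount step traced above.
    dev=/dev/nvme0n1p1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"            # create the mount point
    mkfs.ext4 -qF "$dev"       # quiet, force: reformat the fresh partition
    mount "$dev" "$mnt"        # mount it under the test directory
    : > "$mnt/test_nvme"       # create the dummy file the verify step later checks and removes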
00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:19:06.263 11:27:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:19:09.566 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:09.566 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:19:09.566 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:19:09.566 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:19:09.566 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:19:09.566 11:27:38 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:19:09.566 11:27:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.869 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.870 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.870 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.870 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:12.870 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:12.870 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:19:13.131 11:27:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:16.432 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:16.432 00:19:16.432 real 0m12.510s 00:19:16.432 user 0m3.779s 00:19:16.432 sys 0m6.635s 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:16.432 11:27:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:19:16.432 ************************************ 00:19:16.432 END TEST nvme_mount 00:19:16.432 ************************************ 00:19:16.432 
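Both nvme_mount sub-cases (partition-backed and whole-disk) end the same way: unmount the test directory, then wipe every filesystem and GPT signature so the next test sees a blank disk. The cleanup_nvme sequence traced above amounts to the following, with the device names from this run (run as root):

    # Sketch of cleanup_nvme as traced above.
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"                     # unmount only if still mounted
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1    # erase the ext4 superblock on the partition
    [[ -b /dev/nvme0n1  ]] && wipefs --all /dev/nvme0n1       # erase the GPT headers and PMBR on the disk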
11:27:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:19:16.432 11:27:45 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:16.432 11:27:45 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:16.432 11:27:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:19:16.432 ************************************ 00:19:16.432 START TEST dm_mount 00:19:16.432 ************************************ 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:19:16.432 11:27:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:19:17.815 Creating new GPT entries in memory. 00:19:17.815 GPT data structures destroyed! You may now partition the disk using fdisk or 00:19:17.815 other utilities. 00:19:17.815 11:27:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:19:17.815 11:27:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:17.815 11:27:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:19:17.815 11:27:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:17.815 11:27:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:19:18.757 Creating new GPT entries in memory. 00:19:18.757 The operation has completed successfully. 
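dm_mount repeats the partitioning dance, but with two 1 GiB partitions (2097152 sectors each) that will later back the device-mapper target. The sgdisk calls issued above and in the step that follows reduce to the sketch below; the harness wraps each call in flock on the disk and waits for udev events via scripts/sync_dev_uevents.sh rather than calling partprobe, so treat the last line as an approximation:

    # Sketch: carve two 1 GiB partitions for the DM test (run as root).
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                 # destroy any old GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199      # partition 1: sectors 2048..2099199
    sgdisk "$disk" --new=2:2099200:4196351   # partition 2: immediately after partition 1
    partprobe "$disk"                        # re-read the partition table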
00:19:18.757 11:27:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:19:18.757 11:27:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:18.757 11:27:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:19:18.757 11:27:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:19:18.757 11:27:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:19:19.698 The operation has completed successfully. 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2099783 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:19:19.698 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:19:19.699 11:27:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.001 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:19:23.002 
11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:19:23.002 11:27:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.302 11:27:55 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:19:26.303 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:19:26.563 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:19:26.563 00:19:26.563 real 0m9.950s 00:19:26.563 user 0m2.598s 00:19:26.563 sys 0m4.412s 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:26.563 11:27:55 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:19:26.563 ************************************ 00:19:26.563 END TEST dm_mount 00:19:26.563 ************************************ 00:19:26.563 11:27:55 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:19:26.563 11:27:55 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:19:26.563 11:27:55 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:19:26.563 11:27:55 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
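The cleanup_dm trace just above tears the device-mapper test target down in a fixed order: check the mount point, remove the nvme_dm_test mapping with dmsetup, then wipe the filesystem signatures from both backing partitions. A minimal standalone sketch of that same order, with the mount point, mapping name and partition paths copied from this trace (example values for this run, not fixed paths):

# Tear down a device-mapper test target in the order cleanup_dm uses above.
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount   # mount point from the trace
dm_name=nvme_dm_test                                                        # dm mapping name from the trace

mountpoint -q "$mnt" && umount "$mnt"                 # unmount first so the mapping is no longer busy
if [[ -L /dev/mapper/$dm_name ]]; then
    dmsetup remove --force "$dm_name"                 # drop the dm mapping
fi
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do         # wipe leftover filesystem signatures
    [[ -b $part ]] && wipefs --all "$part"
done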
00:19:26.563 11:27:55 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:19:26.563 11:27:55 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:19:26.563 11:27:55 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:19:26.824 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:19:26.824 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:19:26.825 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:19:26.825 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:19:26.825 11:27:55 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:19:26.825 11:27:55 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:19:26.825 11:27:55 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:19:26.825 11:27:55 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:19:26.825 11:27:55 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:19:26.825 11:27:55 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:19:26.825 11:27:55 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:19:26.825 00:19:26.825 real 0m26.865s 00:19:26.825 user 0m7.875s 00:19:26.825 sys 0m13.829s 00:19:26.825 11:27:55 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:26.825 11:27:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:19:26.825 ************************************ 00:19:26.825 END TEST devices 00:19:26.825 ************************************ 00:19:26.825 00:19:26.825 real 1m30.572s 00:19:26.825 user 0m30.184s 00:19:26.825 sys 0m51.827s 00:19:26.825 11:27:55 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:26.825 11:27:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:19:26.825 ************************************ 00:19:26.825 END TEST setup.sh 00:19:26.825 ************************************ 00:19:26.825 11:27:55 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:19:30.128 Hugepages 00:19:30.128 node hugesize free / total 00:19:30.128 node0 1048576kB 0 / 0 00:19:30.128 node0 2048kB 2048 / 2048 00:19:30.128 node1 1048576kB 0 / 0 00:19:30.128 node1 2048kB 0 / 0 00:19:30.128 00:19:30.129 Type BDF Vendor Device NUMA Driver Device Block devices 00:19:30.129 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:19:30.129 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:19:30.129 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:19:30.129 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:19:30.129 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:19:30.129 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:19:30.129 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:19:30.129 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:19:30.390 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:19:30.390 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:19:30.390 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:19:30.390 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:19:30.390 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:19:30.390 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:19:30.390 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:19:30.390 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:19:30.390 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:19:30.390 11:27:59 -- spdk/autotest.sh@130 -- # uname -s 
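The setup.sh status output above reports hugepage availability per NUMA node (2048 free of 2048 2 MB pages on node0, none configured on node1 in this run) before listing the PCI devices. The same counters can be read straight from sysfs; a small sketch, assuming only the kernel's standard per-node hugepage layout:

# Print free/total 2 MB hugepages for every NUMA node, mirroring the status table above.
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    echo "$(basename "$node"): $(cat "$hp/free_hugepages") free / $(cat "$hp/nr_hugepages") total"
done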
00:19:30.390 11:27:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:19:30.390 11:27:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:19:30.390 11:27:59 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:33.691 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:19:33.691 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:19:35.604 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:19:35.604 11:28:04 -- common/autotest_common.sh@1531 -- # sleep 1 00:19:36.546 11:28:05 -- common/autotest_common.sh@1532 -- # bdfs=() 00:19:36.546 11:28:05 -- common/autotest_common.sh@1532 -- # local bdfs 00:19:36.546 11:28:05 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:19:36.546 11:28:05 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:19:36.546 11:28:05 -- common/autotest_common.sh@1512 -- # bdfs=() 00:19:36.546 11:28:05 -- common/autotest_common.sh@1512 -- # local bdfs 00:19:36.547 11:28:05 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:36.547 11:28:05 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:36.547 11:28:05 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:19:36.807 11:28:05 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:19:36.807 11:28:05 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:19:36.807 11:28:05 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:39.447 Waiting for block devices as requested 00:19:39.713 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:19:39.713 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:19:39.713 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:19:39.713 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:19:39.974 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:19:39.974 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:19:39.974 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:19:40.235 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:19:40.235 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:19:40.496 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:19:40.496 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:19:40.496 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:19:40.496 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:19:40.757 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:19:40.757 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:19:40.757 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:19:40.757 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:19:40.757 11:28:09 -- common/autotest_common.sh@1537 -- # 
for bdf in "${bdfs[@]}" 00:19:40.757 11:28:09 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:19:40.757 11:28:09 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:19:40.757 11:28:09 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:19:40.757 11:28:09 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:19:40.757 11:28:09 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:19:40.757 11:28:09 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:19:41.018 11:28:09 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:19:41.018 11:28:09 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:19:41.018 11:28:09 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:19:41.018 11:28:09 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:19:41.018 11:28:09 -- common/autotest_common.sh@1544 -- # grep oacs 00:19:41.018 11:28:09 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:19:41.018 11:28:09 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:19:41.018 11:28:09 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:19:41.018 11:28:09 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:19:41.018 11:28:09 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:19:41.018 11:28:09 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:19:41.018 11:28:09 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:19:41.018 11:28:09 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:19:41.018 11:28:09 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:19:41.018 11:28:09 -- common/autotest_common.sh@1556 -- # continue 00:19:41.018 11:28:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:19:41.018 11:28:09 -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:41.018 11:28:09 -- common/autotest_common.sh@10 -- # set +x 00:19:41.018 11:28:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:19:41.018 11:28:09 -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:41.018 11:28:09 -- common/autotest_common.sh@10 -- # set +x 00:19:41.018 11:28:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:44.332 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:19:44.332 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:19:44.332 11:28:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:19:44.332 11:28:13 -- common/autotest_common.sh@729 -- # xtrace_disable 
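The id-ctrl probing above decides whether the controller supports namespace management by pulling the oacs field out of nvme id-ctrl and masking bit 3; with oacs reported as 0x5f the masked value is 8, so the check passes and unvmcap is inspected next. A hedged restatement of that check as a standalone snippet (nvme-cli assumed to be installed, /dev/nvme0 taken from this trace):

# Read Optional Admin Command Support (OACS) and test the namespace-management bit (bit 3).
oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then
    echo "namespace management supported (oacs=$oacs)"
else
    echo "namespace management not supported (oacs=$oacs)"
fi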
00:19:44.332 11:28:13 -- common/autotest_common.sh@10 -- # set +x 00:19:44.332 11:28:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:19:44.332 11:28:13 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:19:44.332 11:28:13 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:19:44.332 11:28:13 -- common/autotest_common.sh@1576 -- # bdfs=() 00:19:44.332 11:28:13 -- common/autotest_common.sh@1576 -- # local bdfs 00:19:44.332 11:28:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:19:44.332 11:28:13 -- common/autotest_common.sh@1512 -- # bdfs=() 00:19:44.332 11:28:13 -- common/autotest_common.sh@1512 -- # local bdfs 00:19:44.332 11:28:13 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:44.333 11:28:13 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:44.333 11:28:13 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:19:44.333 11:28:13 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:19:44.333 11:28:13 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:19:44.333 11:28:13 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:19:44.333 11:28:13 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:19:44.333 11:28:13 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:19:44.333 11:28:13 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:19:44.333 11:28:13 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:19:44.333 11:28:13 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:19:44.333 11:28:13 -- common/autotest_common.sh@1592 -- # return 0 00:19:44.333 11:28:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:19:44.333 11:28:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:19:44.333 11:28:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:19:44.333 11:28:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:19:44.333 11:28:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:19:44.333 11:28:13 -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:44.333 11:28:13 -- common/autotest_common.sh@10 -- # set +x 00:19:44.333 11:28:13 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:19:44.333 11:28:13 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:19:44.333 11:28:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:44.333 11:28:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:44.333 11:28:13 -- common/autotest_common.sh@10 -- # set +x 00:19:44.333 ************************************ 00:19:44.333 START TEST env 00:19:44.333 ************************************ 00:19:44.333 11:28:13 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:19:44.594 * Looking for test storage... 
00:19:44.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:19:44.594 11:28:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:19:44.594 11:28:13 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:44.594 11:28:13 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:44.594 11:28:13 env -- common/autotest_common.sh@10 -- # set +x 00:19:44.594 ************************************ 00:19:44.594 START TEST env_memory 00:19:44.594 ************************************ 00:19:44.594 11:28:13 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:19:44.594 00:19:44.594 00:19:44.594 CUnit - A unit testing framework for C - Version 2.1-3 00:19:44.594 http://cunit.sourceforge.net/ 00:19:44.594 00:19:44.594 00:19:44.594 Suite: memory 00:19:44.594 Test: alloc and free memory map ...[2024-06-10 11:28:13.459481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:19:44.594 passed 00:19:44.594 Test: mem map translation ...[2024-06-10 11:28:13.485197] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:19:44.594 [2024-06-10 11:28:13.485227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:19:44.594 [2024-06-10 11:28:13.485273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:19:44.594 [2024-06-10 11:28:13.485281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:19:44.594 passed 00:19:44.594 Test: mem map registration ...[2024-06-10 11:28:13.540619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:19:44.594 [2024-06-10 11:28:13.540641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:19:44.594 passed 00:19:44.857 Test: mem map adjacent registrations ...passed 00:19:44.857 00:19:44.857 Run Summary: Type Total Ran Passed Failed Inactive 00:19:44.857 suites 1 1 n/a 0 0 00:19:44.857 tests 4 4 4 0 0 00:19:44.857 asserts 152 152 152 0 n/a 00:19:44.857 00:19:44.857 Elapsed time = 0.194 seconds 00:19:44.857 00:19:44.857 real 0m0.209s 00:19:44.857 user 0m0.194s 00:19:44.857 sys 0m0.013s 00:19:44.857 11:28:13 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:44.857 11:28:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:19:44.857 ************************************ 00:19:44.857 END TEST env_memory 00:19:44.857 ************************************ 00:19:44.857 11:28:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:19:44.857 11:28:13 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:44.857 11:28:13 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
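Every sub-test in this log is driven by the same run_test wrapper, visible for env_memory just above: it prints a START banner, runs the test binary under time (producing the real/user/sys lines), and closes with an END banner. A rough stand-in for that wrapper, assuming nothing about the real helper in autotest_common.sh beyond what the banners and timing lines show:

# Minimal run_test-style wrapper: banner, timed command, banner.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # the timed command produces the real/user/sys summary
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# e.g. run_test_sketch env_memory ./test/env/memory/memory_ut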
00:19:44.857 11:28:13 env -- common/autotest_common.sh@10 -- # set +x 00:19:44.857 ************************************ 00:19:44.857 START TEST env_vtophys 00:19:44.857 ************************************ 00:19:44.857 11:28:13 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:19:44.857 EAL: lib.eal log level changed from notice to debug 00:19:44.857 EAL: Detected lcore 0 as core 0 on socket 0 00:19:44.857 EAL: Detected lcore 1 as core 1 on socket 0 00:19:44.857 EAL: Detected lcore 2 as core 2 on socket 0 00:19:44.857 EAL: Detected lcore 3 as core 3 on socket 0 00:19:44.857 EAL: Detected lcore 4 as core 4 on socket 0 00:19:44.857 EAL: Detected lcore 5 as core 5 on socket 0 00:19:44.857 EAL: Detected lcore 6 as core 6 on socket 0 00:19:44.857 EAL: Detected lcore 7 as core 7 on socket 0 00:19:44.857 EAL: Detected lcore 8 as core 8 on socket 0 00:19:44.857 EAL: Detected lcore 9 as core 9 on socket 0 00:19:44.857 EAL: Detected lcore 10 as core 10 on socket 0 00:19:44.857 EAL: Detected lcore 11 as core 11 on socket 0 00:19:44.857 EAL: Detected lcore 12 as core 12 on socket 0 00:19:44.857 EAL: Detected lcore 13 as core 13 on socket 0 00:19:44.857 EAL: Detected lcore 14 as core 14 on socket 0 00:19:44.857 EAL: Detected lcore 15 as core 15 on socket 0 00:19:44.857 EAL: Detected lcore 16 as core 16 on socket 0 00:19:44.857 EAL: Detected lcore 17 as core 17 on socket 0 00:19:44.857 EAL: Detected lcore 18 as core 18 on socket 0 00:19:44.857 EAL: Detected lcore 19 as core 19 on socket 0 00:19:44.857 EAL: Detected lcore 20 as core 20 on socket 0 00:19:44.857 EAL: Detected lcore 21 as core 21 on socket 0 00:19:44.857 EAL: Detected lcore 22 as core 22 on socket 0 00:19:44.857 EAL: Detected lcore 23 as core 23 on socket 0 00:19:44.857 EAL: Detected lcore 24 as core 24 on socket 0 00:19:44.857 EAL: Detected lcore 25 as core 25 on socket 0 00:19:44.857 EAL: Detected lcore 26 as core 26 on socket 0 00:19:44.857 EAL: Detected lcore 27 as core 27 on socket 0 00:19:44.857 EAL: Detected lcore 28 as core 28 on socket 0 00:19:44.857 EAL: Detected lcore 29 as core 29 on socket 0 00:19:44.857 EAL: Detected lcore 30 as core 30 on socket 0 00:19:44.857 EAL: Detected lcore 31 as core 31 on socket 0 00:19:44.857 EAL: Detected lcore 32 as core 32 on socket 0 00:19:44.857 EAL: Detected lcore 33 as core 33 on socket 0 00:19:44.857 EAL: Detected lcore 34 as core 34 on socket 0 00:19:44.857 EAL: Detected lcore 35 as core 35 on socket 0 00:19:44.857 EAL: Detected lcore 36 as core 0 on socket 1 00:19:44.857 EAL: Detected lcore 37 as core 1 on socket 1 00:19:44.857 EAL: Detected lcore 38 as core 2 on socket 1 00:19:44.857 EAL: Detected lcore 39 as core 3 on socket 1 00:19:44.857 EAL: Detected lcore 40 as core 4 on socket 1 00:19:44.857 EAL: Detected lcore 41 as core 5 on socket 1 00:19:44.857 EAL: Detected lcore 42 as core 6 on socket 1 00:19:44.857 EAL: Detected lcore 43 as core 7 on socket 1 00:19:44.857 EAL: Detected lcore 44 as core 8 on socket 1 00:19:44.857 EAL: Detected lcore 45 as core 9 on socket 1 00:19:44.857 EAL: Detected lcore 46 as core 10 on socket 1 00:19:44.857 EAL: Detected lcore 47 as core 11 on socket 1 00:19:44.857 EAL: Detected lcore 48 as core 12 on socket 1 00:19:44.857 EAL: Detected lcore 49 as core 13 on socket 1 00:19:44.857 EAL: Detected lcore 50 as core 14 on socket 1 00:19:44.857 EAL: Detected lcore 51 as core 15 on socket 1 00:19:44.857 EAL: Detected lcore 52 as core 16 on socket 1 00:19:44.857 EAL: Detected lcore 
53 as core 17 on socket 1 00:19:44.857 EAL: Detected lcore 54 as core 18 on socket 1 00:19:44.857 EAL: Detected lcore 55 as core 19 on socket 1 00:19:44.857 EAL: Detected lcore 56 as core 20 on socket 1 00:19:44.857 EAL: Detected lcore 57 as core 21 on socket 1 00:19:44.857 EAL: Detected lcore 58 as core 22 on socket 1 00:19:44.857 EAL: Detected lcore 59 as core 23 on socket 1 00:19:44.857 EAL: Detected lcore 60 as core 24 on socket 1 00:19:44.857 EAL: Detected lcore 61 as core 25 on socket 1 00:19:44.857 EAL: Detected lcore 62 as core 26 on socket 1 00:19:44.857 EAL: Detected lcore 63 as core 27 on socket 1 00:19:44.857 EAL: Detected lcore 64 as core 28 on socket 1 00:19:44.857 EAL: Detected lcore 65 as core 29 on socket 1 00:19:44.857 EAL: Detected lcore 66 as core 30 on socket 1 00:19:44.857 EAL: Detected lcore 67 as core 31 on socket 1 00:19:44.857 EAL: Detected lcore 68 as core 32 on socket 1 00:19:44.857 EAL: Detected lcore 69 as core 33 on socket 1 00:19:44.857 EAL: Detected lcore 70 as core 34 on socket 1 00:19:44.857 EAL: Detected lcore 71 as core 35 on socket 1 00:19:44.857 EAL: Detected lcore 72 as core 0 on socket 0 00:19:44.857 EAL: Detected lcore 73 as core 1 on socket 0 00:19:44.857 EAL: Detected lcore 74 as core 2 on socket 0 00:19:44.857 EAL: Detected lcore 75 as core 3 on socket 0 00:19:44.857 EAL: Detected lcore 76 as core 4 on socket 0 00:19:44.857 EAL: Detected lcore 77 as core 5 on socket 0 00:19:44.857 EAL: Detected lcore 78 as core 6 on socket 0 00:19:44.857 EAL: Detected lcore 79 as core 7 on socket 0 00:19:44.857 EAL: Detected lcore 80 as core 8 on socket 0 00:19:44.857 EAL: Detected lcore 81 as core 9 on socket 0 00:19:44.857 EAL: Detected lcore 82 as core 10 on socket 0 00:19:44.857 EAL: Detected lcore 83 as core 11 on socket 0 00:19:44.857 EAL: Detected lcore 84 as core 12 on socket 0 00:19:44.857 EAL: Detected lcore 85 as core 13 on socket 0 00:19:44.857 EAL: Detected lcore 86 as core 14 on socket 0 00:19:44.857 EAL: Detected lcore 87 as core 15 on socket 0 00:19:44.857 EAL: Detected lcore 88 as core 16 on socket 0 00:19:44.857 EAL: Detected lcore 89 as core 17 on socket 0 00:19:44.857 EAL: Detected lcore 90 as core 18 on socket 0 00:19:44.857 EAL: Detected lcore 91 as core 19 on socket 0 00:19:44.857 EAL: Detected lcore 92 as core 20 on socket 0 00:19:44.857 EAL: Detected lcore 93 as core 21 on socket 0 00:19:44.857 EAL: Detected lcore 94 as core 22 on socket 0 00:19:44.857 EAL: Detected lcore 95 as core 23 on socket 0 00:19:44.857 EAL: Detected lcore 96 as core 24 on socket 0 00:19:44.857 EAL: Detected lcore 97 as core 25 on socket 0 00:19:44.857 EAL: Detected lcore 98 as core 26 on socket 0 00:19:44.857 EAL: Detected lcore 99 as core 27 on socket 0 00:19:44.857 EAL: Detected lcore 100 as core 28 on socket 0 00:19:44.857 EAL: Detected lcore 101 as core 29 on socket 0 00:19:44.857 EAL: Detected lcore 102 as core 30 on socket 0 00:19:44.857 EAL: Detected lcore 103 as core 31 on socket 0 00:19:44.857 EAL: Detected lcore 104 as core 32 on socket 0 00:19:44.858 EAL: Detected lcore 105 as core 33 on socket 0 00:19:44.858 EAL: Detected lcore 106 as core 34 on socket 0 00:19:44.858 EAL: Detected lcore 107 as core 35 on socket 0 00:19:44.858 EAL: Detected lcore 108 as core 0 on socket 1 00:19:44.858 EAL: Detected lcore 109 as core 1 on socket 1 00:19:44.858 EAL: Detected lcore 110 as core 2 on socket 1 00:19:44.858 EAL: Detected lcore 111 as core 3 on socket 1 00:19:44.858 EAL: Detected lcore 112 as core 4 on socket 1 00:19:44.858 EAL: Detected lcore 113 as core 5 on 
socket 1 00:19:44.858 EAL: Detected lcore 114 as core 6 on socket 1 00:19:44.858 EAL: Detected lcore 115 as core 7 on socket 1 00:19:44.858 EAL: Detected lcore 116 as core 8 on socket 1 00:19:44.858 EAL: Detected lcore 117 as core 9 on socket 1 00:19:44.858 EAL: Detected lcore 118 as core 10 on socket 1 00:19:44.858 EAL: Detected lcore 119 as core 11 on socket 1 00:19:44.858 EAL: Detected lcore 120 as core 12 on socket 1 00:19:44.858 EAL: Detected lcore 121 as core 13 on socket 1 00:19:44.858 EAL: Detected lcore 122 as core 14 on socket 1 00:19:44.858 EAL: Detected lcore 123 as core 15 on socket 1 00:19:44.858 EAL: Detected lcore 124 as core 16 on socket 1 00:19:44.858 EAL: Detected lcore 125 as core 17 on socket 1 00:19:44.858 EAL: Detected lcore 126 as core 18 on socket 1 00:19:44.858 EAL: Detected lcore 127 as core 19 on socket 1 00:19:44.858 EAL: Skipped lcore 128 as core 20 on socket 1 00:19:44.858 EAL: Skipped lcore 129 as core 21 on socket 1 00:19:44.858 EAL: Skipped lcore 130 as core 22 on socket 1 00:19:44.858 EAL: Skipped lcore 131 as core 23 on socket 1 00:19:44.858 EAL: Skipped lcore 132 as core 24 on socket 1 00:19:44.858 EAL: Skipped lcore 133 as core 25 on socket 1 00:19:44.858 EAL: Skipped lcore 134 as core 26 on socket 1 00:19:44.858 EAL: Skipped lcore 135 as core 27 on socket 1 00:19:44.858 EAL: Skipped lcore 136 as core 28 on socket 1 00:19:44.858 EAL: Skipped lcore 137 as core 29 on socket 1 00:19:44.858 EAL: Skipped lcore 138 as core 30 on socket 1 00:19:44.858 EAL: Skipped lcore 139 as core 31 on socket 1 00:19:44.858 EAL: Skipped lcore 140 as core 32 on socket 1 00:19:44.858 EAL: Skipped lcore 141 as core 33 on socket 1 00:19:44.858 EAL: Skipped lcore 142 as core 34 on socket 1 00:19:44.858 EAL: Skipped lcore 143 as core 35 on socket 1 00:19:44.858 EAL: Maximum logical cores by configuration: 128 00:19:44.858 EAL: Detected CPU lcores: 128 00:19:44.858 EAL: Detected NUMA nodes: 2 00:19:44.858 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:19:44.858 EAL: Detected shared linkage of DPDK 00:19:44.858 EAL: No shared files mode enabled, IPC will be disabled 00:19:44.858 EAL: Bus pci wants IOVA as 'DC' 00:19:44.858 EAL: Buses did not request a specific IOVA mode. 00:19:44.858 EAL: IOMMU is available, selecting IOVA as VA mode. 00:19:44.858 EAL: Selected IOVA mode 'VA' 00:19:44.858 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.858 EAL: Probing VFIO support... 00:19:44.858 EAL: IOMMU type 1 (Type 1) is supported 00:19:44.858 EAL: IOMMU type 7 (sPAPR) is not supported 00:19:44.858 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:19:44.858 EAL: VFIO support initialized 00:19:44.858 EAL: Ask a virtual area of 0x2e000 bytes 00:19:44.858 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:19:44.858 EAL: Setting up physically contiguous memory... 
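EAL settles on IOVA-as-VA above because an IOMMU is present and VFIO initializes with type 1 support. Outside of DPDK the same preconditions can be sanity-checked from sysfs and the module list; a sketch, assuming the standard /sys/kernel/iommu_groups path and the vfio_pci module name:

# Quick pre-flight check for the IOMMU/VFIO conditions EAL reports above.
groups=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)
if (( groups > 0 )) && lsmod | grep -q '^vfio_pci'; then
    echo "$groups IOMMU groups and vfio-pci loaded: IOVA-as-VA should be usable"
else
    echo "no IOMMU groups or vfio-pci missing: expect PA or no-IOMMU mode"
fi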
00:19:44.858 EAL: Setting maximum number of open files to 524288 00:19:44.858 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:19:44.858 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:19:44.858 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:19:44.858 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:19:44.858 EAL: Ask a virtual area of 0x61000 bytes 00:19:44.858 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:19:44.858 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:19:44.858 EAL: Ask a virtual area of 0x400000000 bytes 00:19:44.858 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:19:44.858 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:19:44.858 EAL: Hugepages will be freed exactly as allocated. 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: TSC frequency is ~2400000 KHz 00:19:44.858 EAL: Main lcore 0 is ready (tid=7efe98e9ba00;cpuset=[0]) 00:19:44.858 EAL: Trying to obtain current memory policy. 00:19:44.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:44.858 EAL: Restoring previous memory policy: 0 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was expanded by 2MB 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: No PCI address specified using 'addr=' in: bus=pci 00:19:44.858 EAL: Mem event callback 'spdk:(nil)' registered 00:19:44.858 00:19:44.858 00:19:44.858 CUnit - A unit testing framework for C - Version 2.1-3 00:19:44.858 http://cunit.sourceforge.net/ 00:19:44.858 00:19:44.858 00:19:44.858 Suite: components_suite 00:19:44.858 Test: vtophys_malloc_test ...passed 00:19:44.858 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:19:44.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:44.858 EAL: Restoring previous memory policy: 4 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was expanded by 4MB 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was shrunk by 4MB 00:19:44.858 EAL: Trying to obtain current memory policy. 00:19:44.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:44.858 EAL: Restoring previous memory policy: 4 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was expanded by 6MB 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was shrunk by 6MB 00:19:44.858 EAL: Trying to obtain current memory policy. 00:19:44.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:44.858 EAL: Restoring previous memory policy: 4 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was expanded by 10MB 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was shrunk by 10MB 00:19:44.858 EAL: Trying to obtain current memory policy. 
00:19:44.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:44.858 EAL: Restoring previous memory policy: 4 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was expanded by 18MB 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was shrunk by 18MB 00:19:44.858 EAL: Trying to obtain current memory policy. 00:19:44.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:44.858 EAL: Restoring previous memory policy: 4 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was expanded by 34MB 00:19:44.858 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.858 EAL: request: mp_malloc_sync 00:19:44.858 EAL: No shared files mode enabled, IPC is disabled 00:19:44.858 EAL: Heap on socket 0 was shrunk by 34MB 00:19:44.858 EAL: Trying to obtain current memory policy. 00:19:44.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:44.859 EAL: Restoring previous memory policy: 4 00:19:44.859 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.859 EAL: request: mp_malloc_sync 00:19:44.859 EAL: No shared files mode enabled, IPC is disabled 00:19:44.859 EAL: Heap on socket 0 was expanded by 66MB 00:19:44.859 EAL: Calling mem event callback 'spdk:(nil)' 00:19:44.859 EAL: request: mp_malloc_sync 00:19:44.859 EAL: No shared files mode enabled, IPC is disabled 00:19:44.859 EAL: Heap on socket 0 was shrunk by 66MB 00:19:44.859 EAL: Trying to obtain current memory policy. 00:19:44.859 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:45.119 EAL: Restoring previous memory policy: 4 00:19:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.119 EAL: request: mp_malloc_sync 00:19:45.119 EAL: No shared files mode enabled, IPC is disabled 00:19:45.119 EAL: Heap on socket 0 was expanded by 130MB 00:19:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.119 EAL: request: mp_malloc_sync 00:19:45.119 EAL: No shared files mode enabled, IPC is disabled 00:19:45.119 EAL: Heap on socket 0 was shrunk by 130MB 00:19:45.119 EAL: Trying to obtain current memory policy. 00:19:45.119 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:45.119 EAL: Restoring previous memory policy: 4 00:19:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.119 EAL: request: mp_malloc_sync 00:19:45.119 EAL: No shared files mode enabled, IPC is disabled 00:19:45.119 EAL: Heap on socket 0 was expanded by 258MB 00:19:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.119 EAL: request: mp_malloc_sync 00:19:45.119 EAL: No shared files mode enabled, IPC is disabled 00:19:45.119 EAL: Heap on socket 0 was shrunk by 258MB 00:19:45.119 EAL: Trying to obtain current memory policy. 
00:19:45.119 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:45.119 EAL: Restoring previous memory policy: 4 00:19:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.119 EAL: request: mp_malloc_sync 00:19:45.119 EAL: No shared files mode enabled, IPC is disabled 00:19:45.119 EAL: Heap on socket 0 was expanded by 514MB 00:19:45.119 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.380 EAL: request: mp_malloc_sync 00:19:45.380 EAL: No shared files mode enabled, IPC is disabled 00:19:45.380 EAL: Heap on socket 0 was shrunk by 514MB 00:19:45.380 EAL: Trying to obtain current memory policy. 00:19:45.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:45.380 EAL: Restoring previous memory policy: 4 00:19:45.380 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.380 EAL: request: mp_malloc_sync 00:19:45.380 EAL: No shared files mode enabled, IPC is disabled 00:19:45.380 EAL: Heap on socket 0 was expanded by 1026MB 00:19:45.642 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.642 EAL: request: mp_malloc_sync 00:19:45.642 EAL: No shared files mode enabled, IPC is disabled 00:19:45.642 EAL: Heap on socket 0 was shrunk by 1026MB 00:19:45.642 passed 00:19:45.642 00:19:45.642 Run Summary: Type Total Ran Passed Failed Inactive 00:19:45.642 suites 1 1 n/a 0 0 00:19:45.642 tests 2 2 2 0 0 00:19:45.642 asserts 497 497 497 0 n/a 00:19:45.642 00:19:45.642 Elapsed time = 0.656 seconds 00:19:45.642 EAL: Calling mem event callback 'spdk:(nil)' 00:19:45.642 EAL: request: mp_malloc_sync 00:19:45.642 EAL: No shared files mode enabled, IPC is disabled 00:19:45.642 EAL: Heap on socket 0 was shrunk by 2MB 00:19:45.642 EAL: No shared files mode enabled, IPC is disabled 00:19:45.642 EAL: No shared files mode enabled, IPC is disabled 00:19:45.642 EAL: No shared files mode enabled, IPC is disabled 00:19:45.642 00:19:45.642 real 0m0.773s 00:19:45.642 user 0m0.404s 00:19:45.642 sys 0m0.347s 00:19:45.642 11:28:14 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:45.642 11:28:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:19:45.642 ************************************ 00:19:45.642 END TEST env_vtophys 00:19:45.642 ************************************ 00:19:45.642 11:28:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:19:45.642 11:28:14 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:45.642 11:28:14 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:45.642 11:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:19:45.642 ************************************ 00:19:45.642 START TEST env_pci 00:19:45.642 ************************************ 00:19:45.642 11:28:14 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:19:45.642 00:19:45.642 00:19:45.642 CUnit - A unit testing framework for C - Version 2.1-3 00:19:45.642 http://cunit.sourceforge.net/ 00:19:45.642 00:19:45.642 00:19:45.642 Suite: pci 00:19:45.642 Test: pci_hook ...[2024-06-10 11:28:14.560695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2110511 has claimed it 00:19:45.642 EAL: Cannot find device (10000:00:01.0) 00:19:45.642 EAL: Failed to attach device on primary process 00:19:45.642 passed 00:19:45.642 00:19:45.642 Run Summary: Type Total Ran Passed Failed Inactive 
00:19:45.642 suites 1 1 n/a 0 0 00:19:45.642 tests 1 1 1 0 0 00:19:45.642 asserts 25 25 25 0 n/a 00:19:45.642 00:19:45.642 Elapsed time = 0.030 seconds 00:19:45.642 00:19:45.642 real 0m0.050s 00:19:45.642 user 0m0.011s 00:19:45.642 sys 0m0.039s 00:19:45.642 11:28:14 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:45.642 11:28:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:19:45.642 ************************************ 00:19:45.642 END TEST env_pci 00:19:45.642 ************************************ 00:19:45.903 11:28:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:19:45.903 11:28:14 env -- env/env.sh@15 -- # uname 00:19:45.903 11:28:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:19:45.903 11:28:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:19:45.903 11:28:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:19:45.903 11:28:14 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:19:45.903 11:28:14 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:45.903 11:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:19:45.903 ************************************ 00:19:45.903 START TEST env_dpdk_post_init 00:19:45.903 ************************************ 00:19:45.903 11:28:14 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:19:45.903 EAL: Detected CPU lcores: 128 00:19:45.903 EAL: Detected NUMA nodes: 2 00:19:45.903 EAL: Detected shared linkage of DPDK 00:19:45.903 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:19:45.903 EAL: Selected IOVA mode 'VA' 00:19:45.903 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.903 EAL: VFIO support initialized 00:19:45.903 TELEMETRY: No legacy callbacks, legacy socket not created 00:19:45.903 EAL: Using IOMMU type 1 (Type 1) 00:19:46.164 EAL: Ignore mapping IO port bar(1) 00:19:46.164 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:19:46.164 EAL: Ignore mapping IO port bar(1) 00:19:46.426 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:19:46.426 EAL: Ignore mapping IO port bar(1) 00:19:46.687 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:19:46.687 EAL: Ignore mapping IO port bar(1) 00:19:46.947 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:19:46.947 EAL: Ignore mapping IO port bar(1) 00:19:46.947 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:19:47.207 EAL: Ignore mapping IO port bar(1) 00:19:47.207 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:19:47.468 EAL: Ignore mapping IO port bar(1) 00:19:47.468 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:19:47.729 EAL: Ignore mapping IO port bar(1) 00:19:47.729 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:19:47.989 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:19:47.989 EAL: Ignore mapping IO port bar(1) 00:19:48.250 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:19:48.250 EAL: Ignore mapping IO port bar(1) 00:19:48.510 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:19:48.510 EAL: Ignore mapping IO port bar(1) 00:19:48.510 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:19:48.770 EAL: Ignore mapping IO port bar(1) 00:19:48.770 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:19:49.031 EAL: Ignore mapping IO port bar(1) 00:19:49.031 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:19:49.291 EAL: Ignore mapping IO port bar(1) 00:19:49.291 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:19:49.291 EAL: Ignore mapping IO port bar(1) 00:19:49.552 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:19:49.552 EAL: Ignore mapping IO port bar(1) 00:19:49.812 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:19:49.812 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:19:49.812 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:19:49.812 Starting DPDK initialization... 00:19:49.812 Starting SPDK post initialization... 00:19:49.812 SPDK NVMe probe 00:19:49.812 Attaching to 0000:65:00.0 00:19:49.812 Attached to 0000:65:00.0 00:19:49.812 Cleaning up... 00:19:51.726 00:19:51.726 real 0m5.716s 00:19:51.726 user 0m0.180s 00:19:51.726 sys 0m0.075s 00:19:51.726 11:28:20 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:51.726 11:28:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:19:51.726 ************************************ 00:19:51.726 END TEST env_dpdk_post_init 00:19:51.726 ************************************ 00:19:51.726 11:28:20 env -- env/env.sh@26 -- # uname 00:19:51.726 11:28:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:19:51.726 11:28:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:19:51.726 11:28:20 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:51.726 11:28:20 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:51.726 11:28:20 env -- common/autotest_common.sh@10 -- # set +x 00:19:51.726 ************************************ 00:19:51.726 START TEST env_mem_callbacks 00:19:51.726 ************************************ 00:19:51.726 11:28:20 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:19:51.726 EAL: Detected CPU lcores: 128 00:19:51.726 EAL: Detected NUMA nodes: 2 00:19:51.726 EAL: Detected shared linkage of DPDK 00:19:51.726 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:19:51.726 EAL: Selected IOVA mode 'VA' 00:19:51.726 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.726 EAL: VFIO support initialized 00:19:51.726 TELEMETRY: No legacy callbacks, legacy socket not created 00:19:51.726 00:19:51.726 00:19:51.726 CUnit - A unit testing framework for C - Version 2.1-3 00:19:51.726 http://cunit.sourceforge.net/ 00:19:51.726 00:19:51.726 00:19:51.726 Suite: memory 00:19:51.726 Test: test ... 
00:19:51.726 register 0x200000200000 2097152 00:19:51.726 malloc 3145728 00:19:51.726 register 0x200000400000 4194304 00:19:51.726 buf 0x200000500000 len 3145728 PASSED 00:19:51.726 malloc 64 00:19:51.726 buf 0x2000004fff40 len 64 PASSED 00:19:51.726 malloc 4194304 00:19:51.726 register 0x200000800000 6291456 00:19:51.726 buf 0x200000a00000 len 4194304 PASSED 00:19:51.726 free 0x200000500000 3145728 00:19:51.726 free 0x2000004fff40 64 00:19:51.726 unregister 0x200000400000 4194304 PASSED 00:19:51.726 free 0x200000a00000 4194304 00:19:51.726 unregister 0x200000800000 6291456 PASSED 00:19:51.726 malloc 8388608 00:19:51.726 register 0x200000400000 10485760 00:19:51.726 buf 0x200000600000 len 8388608 PASSED 00:19:51.726 free 0x200000600000 8388608 00:19:51.726 unregister 0x200000400000 10485760 PASSED 00:19:51.726 passed 00:19:51.726 00:19:51.726 Run Summary: Type Total Ran Passed Failed Inactive 00:19:51.726 suites 1 1 n/a 0 0 00:19:51.726 tests 1 1 1 0 0 00:19:51.726 asserts 15 15 15 0 n/a 00:19:51.726 00:19:51.726 Elapsed time = 0.004 seconds 00:19:51.726 00:19:51.726 real 0m0.056s 00:19:51.726 user 0m0.020s 00:19:51.726 sys 0m0.037s 00:19:51.726 11:28:20 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:51.726 11:28:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:19:51.726 ************************************ 00:19:51.726 END TEST env_mem_callbacks 00:19:51.726 ************************************ 00:19:51.726 00:19:51.726 real 0m7.296s 00:19:51.726 user 0m0.974s 00:19:51.726 sys 0m0.864s 00:19:51.726 11:28:20 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:51.726 11:28:20 env -- common/autotest_common.sh@10 -- # set +x 00:19:51.726 ************************************ 00:19:51.726 END TEST env 00:19:51.726 ************************************ 00:19:51.726 11:28:20 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:19:51.726 11:28:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:51.726 11:28:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:51.726 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:19:51.726 ************************************ 00:19:51.726 START TEST rpc 00:19:51.726 ************************************ 00:19:51.726 11:28:20 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:19:51.986 * Looking for test storage... 00:19:51.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:19:51.986 11:28:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2111956 00:19:51.986 11:28:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:51.986 11:28:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:19:51.986 11:28:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2111956 00:19:51.986 11:28:20 rpc -- common/autotest_common.sh@830 -- # '[' -z 2111956 ']' 00:19:51.987 11:28:20 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.987 11:28:20 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:51.987 11:28:20 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
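rpc.sh, traced in this part of the run, launches spdk_tgt with -e bdev in the background and then blocks in waitforlisten until the RPC UNIX socket at /var/tmp/spdk.sock answers. A simplified sketch of the same idea that merely polls for the socket file instead of using the real helper (binary and socket paths copied from the log):

# Start the SPDK target and wait for its RPC UNIX socket to appear.
./build/bin/spdk_tgt -e bdev &
tgt_pid=$!

sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    [[ -S $sock ]] && break                                  # socket exists once the app is listening
    kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
[[ -S $sock ]] || { echo "timed out waiting for $sock" >&2; exit 1; }
echo "spdk_tgt (pid $tgt_pid) is listening on $sock"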
00:19:51.987 11:28:20 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:51.987 11:28:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:51.987 [2024-06-10 11:28:20.797911] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:19:51.987 [2024-06-10 11:28:20.797974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111956 ] 00:19:51.987 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.987 [2024-06-10 11:28:20.861244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.987 [2024-06-10 11:28:20.931705] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:19:51.987 [2024-06-10 11:28:20.931742] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2111956' to capture a snapshot of events at runtime. 00:19:51.987 [2024-06-10 11:28:20.931750] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.987 [2024-06-10 11:28:20.931756] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.987 [2024-06-10 11:28:20.931762] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2111956 for offline analysis/debug. 00:19:51.987 [2024-06-10 11:28:20.931781] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.247 11:28:21 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:52.247 11:28:21 rpc -- common/autotest_common.sh@863 -- # return 0 00:19:52.247 11:28:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:19:52.247 11:28:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:19:52.247 11:28:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:19:52.247 11:28:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:19:52.247 11:28:21 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:52.247 11:28:21 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:52.247 11:28:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 ************************************ 00:19:52.247 START TEST rpc_integrity 00:19:52.247 ************************************ 00:19:52.247 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:19:52.247 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:52.247 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.247 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.247 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:19:52.247 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:19:52.247 11:28:21 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:19:52.247 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:19:52.247 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.247 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.247 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:19:52.247 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.508 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:19:52.508 { 00:19:52.508 "name": "Malloc0", 00:19:52.508 "aliases": [ 00:19:52.508 "45fc5355-3a68-4d6b-b473-3a9dcfedf044" 00:19:52.508 ], 00:19:52.508 "product_name": "Malloc disk", 00:19:52.508 "block_size": 512, 00:19:52.508 "num_blocks": 16384, 00:19:52.508 "uuid": "45fc5355-3a68-4d6b-b473-3a9dcfedf044", 00:19:52.508 "assigned_rate_limits": { 00:19:52.508 "rw_ios_per_sec": 0, 00:19:52.508 "rw_mbytes_per_sec": 0, 00:19:52.508 "r_mbytes_per_sec": 0, 00:19:52.508 "w_mbytes_per_sec": 0 00:19:52.508 }, 00:19:52.508 "claimed": false, 00:19:52.508 "zoned": false, 00:19:52.508 "supported_io_types": { 00:19:52.508 "read": true, 00:19:52.508 "write": true, 00:19:52.508 "unmap": true, 00:19:52.508 "write_zeroes": true, 00:19:52.508 "flush": true, 00:19:52.508 "reset": true, 00:19:52.508 "compare": false, 00:19:52.508 "compare_and_write": false, 00:19:52.508 "abort": true, 00:19:52.508 "nvme_admin": false, 00:19:52.508 "nvme_io": false 00:19:52.508 }, 00:19:52.508 "memory_domains": [ 00:19:52.508 { 00:19:52.508 "dma_device_id": "system", 00:19:52.508 "dma_device_type": 1 00:19:52.508 }, 00:19:52.508 { 00:19:52.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.508 "dma_device_type": 2 00:19:52.508 } 00:19:52.508 ], 00:19:52.508 "driver_specific": {} 00:19:52.508 } 00:19:52.508 ]' 00:19:52.508 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:19:52.508 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:19:52.508 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.508 [2024-06-10 11:28:21.286259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:19:52.508 [2024-06-10 11:28:21.286291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.508 [2024-06-10 11:28:21.286304] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2157be0 00:19:52.508 [2024-06-10 11:28:21.286311] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.508 [2024-06-10 11:28:21.287637] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.508 [2024-06-10 11:28:21.287657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:19:52.508 Passthru0 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.508 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.508 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.508 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:19:52.508 { 00:19:52.508 "name": "Malloc0", 00:19:52.508 "aliases": [ 00:19:52.508 "45fc5355-3a68-4d6b-b473-3a9dcfedf044" 00:19:52.508 ], 00:19:52.508 "product_name": "Malloc disk", 00:19:52.508 "block_size": 512, 00:19:52.508 "num_blocks": 16384, 00:19:52.509 "uuid": "45fc5355-3a68-4d6b-b473-3a9dcfedf044", 00:19:52.509 "assigned_rate_limits": { 00:19:52.509 "rw_ios_per_sec": 0, 00:19:52.509 "rw_mbytes_per_sec": 0, 00:19:52.509 "r_mbytes_per_sec": 0, 00:19:52.509 "w_mbytes_per_sec": 0 00:19:52.509 }, 00:19:52.509 "claimed": true, 00:19:52.509 "claim_type": "exclusive_write", 00:19:52.509 "zoned": false, 00:19:52.509 "supported_io_types": { 00:19:52.509 "read": true, 00:19:52.509 "write": true, 00:19:52.509 "unmap": true, 00:19:52.509 "write_zeroes": true, 00:19:52.509 "flush": true, 00:19:52.509 "reset": true, 00:19:52.509 "compare": false, 00:19:52.509 "compare_and_write": false, 00:19:52.509 "abort": true, 00:19:52.509 "nvme_admin": false, 00:19:52.509 "nvme_io": false 00:19:52.509 }, 00:19:52.509 "memory_domains": [ 00:19:52.509 { 00:19:52.509 "dma_device_id": "system", 00:19:52.509 "dma_device_type": 1 00:19:52.509 }, 00:19:52.509 { 00:19:52.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.509 "dma_device_type": 2 00:19:52.509 } 00:19:52.509 ], 00:19:52.509 "driver_specific": {} 00:19:52.509 }, 00:19:52.509 { 00:19:52.509 "name": "Passthru0", 00:19:52.509 "aliases": [ 00:19:52.509 "c7efd3f7-752b-560b-8aa3-beb8457d4945" 00:19:52.509 ], 00:19:52.509 "product_name": "passthru", 00:19:52.509 "block_size": 512, 00:19:52.509 "num_blocks": 16384, 00:19:52.509 "uuid": "c7efd3f7-752b-560b-8aa3-beb8457d4945", 00:19:52.509 "assigned_rate_limits": { 00:19:52.509 "rw_ios_per_sec": 0, 00:19:52.509 "rw_mbytes_per_sec": 0, 00:19:52.509 "r_mbytes_per_sec": 0, 00:19:52.509 "w_mbytes_per_sec": 0 00:19:52.509 }, 00:19:52.509 "claimed": false, 00:19:52.509 "zoned": false, 00:19:52.509 "supported_io_types": { 00:19:52.509 "read": true, 00:19:52.509 "write": true, 00:19:52.509 "unmap": true, 00:19:52.509 "write_zeroes": true, 00:19:52.509 "flush": true, 00:19:52.509 "reset": true, 00:19:52.509 "compare": false, 00:19:52.509 "compare_and_write": false, 00:19:52.509 "abort": true, 00:19:52.509 "nvme_admin": false, 00:19:52.509 "nvme_io": false 00:19:52.509 }, 00:19:52.509 "memory_domains": [ 00:19:52.509 { 00:19:52.509 "dma_device_id": "system", 00:19:52.509 "dma_device_type": 1 00:19:52.509 }, 00:19:52.509 { 00:19:52.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.509 "dma_device_type": 2 00:19:52.509 } 00:19:52.509 ], 00:19:52.509 "driver_specific": { 00:19:52.509 "passthru": { 00:19:52.509 "name": "Passthru0", 00:19:52.509 "base_bdev_name": "Malloc0" 00:19:52.509 } 00:19:52.509 } 00:19:52.509 } 00:19:52.509 ]' 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.509 
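The rpc_integrity flow traced above is a create/verify/tear-down cycle over the plain JSON-RPC surface: create a malloc bdev, layer a passthru vbdev on it, confirm bdev_get_bdevs reports both (with Malloc0 claimed by Passthru0), then delete in reverse order. A condensed sketch of the same calls issued by hand, assuming the standard scripts/rpc.py path instead of the test's rpc_cmd wrapper:

    ./scripts/rpc.py bdev_malloc_create 8 512                       # -> Malloc0: 16384 blocks of 512 bytes
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # layer a passthru vbdev on the malloc bdev
    ./scripts/rpc.py bdev_get_bdevs | jq length                     # expect 2: Malloc0 (claimed) + Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                     # back to 0
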
11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:19:52.509 11:28:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:19:52.509 00:19:52.509 real 0m0.297s 00:19:52.509 user 0m0.186s 00:19:52.509 sys 0m0.042s 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:52.509 11:28:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:52.509 ************************************ 00:19:52.509 END TEST rpc_integrity 00:19:52.509 ************************************ 00:19:52.509 11:28:21 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:19:52.509 11:28:21 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:52.509 11:28:21 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:52.509 11:28:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 ************************************ 00:19:52.810 START TEST rpc_plugins 00:19:52.810 ************************************ 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:19:52.810 { 00:19:52.810 "name": "Malloc1", 00:19:52.810 "aliases": [ 00:19:52.810 "80a72690-9634-4f23-967d-edba9213e745" 00:19:52.810 ], 00:19:52.810 "product_name": "Malloc disk", 00:19:52.810 "block_size": 4096, 00:19:52.810 "num_blocks": 256, 00:19:52.810 "uuid": "80a72690-9634-4f23-967d-edba9213e745", 00:19:52.810 "assigned_rate_limits": { 00:19:52.810 "rw_ios_per_sec": 0, 00:19:52.810 "rw_mbytes_per_sec": 0, 00:19:52.810 "r_mbytes_per_sec": 0, 00:19:52.810 "w_mbytes_per_sec": 0 00:19:52.810 }, 00:19:52.810 "claimed": false, 00:19:52.810 "zoned": false, 00:19:52.810 "supported_io_types": { 00:19:52.810 "read": true, 00:19:52.810 "write": true, 00:19:52.810 "unmap": true, 00:19:52.810 "write_zeroes": true, 00:19:52.810 
"flush": true, 00:19:52.810 "reset": true, 00:19:52.810 "compare": false, 00:19:52.810 "compare_and_write": false, 00:19:52.810 "abort": true, 00:19:52.810 "nvme_admin": false, 00:19:52.810 "nvme_io": false 00:19:52.810 }, 00:19:52.810 "memory_domains": [ 00:19:52.810 { 00:19:52.810 "dma_device_id": "system", 00:19:52.810 "dma_device_type": 1 00:19:52.810 }, 00:19:52.810 { 00:19:52.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.810 "dma_device_type": 2 00:19:52.810 } 00:19:52.810 ], 00:19:52.810 "driver_specific": {} 00:19:52.810 } 00:19:52.810 ]' 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:19:52.810 11:28:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:19:52.810 00:19:52.810 real 0m0.152s 00:19:52.810 user 0m0.089s 00:19:52.810 sys 0m0.024s 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:52.810 11:28:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 ************************************ 00:19:52.810 END TEST rpc_plugins 00:19:52.810 ************************************ 00:19:52.810 11:28:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:19:52.810 11:28:21 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:52.810 11:28:21 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:52.810 11:28:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 ************************************ 00:19:52.810 START TEST rpc_trace_cmd_test 00:19:52.810 ************************************ 00:19:52.810 11:28:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:19:52.810 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:19:52.810 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:19:52.810 11:28:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.810 11:28:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.810 11:28:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.811 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:19:52.811 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2111956", 00:19:52.811 "tpoint_group_mask": "0x8", 00:19:52.811 "iscsi_conn": { 00:19:52.811 "mask": "0x2", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "scsi": { 00:19:52.811 "mask": "0x4", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "bdev": { 00:19:52.811 "mask": "0x8", 00:19:52.811 "tpoint_mask": 
"0xffffffffffffffff" 00:19:52.811 }, 00:19:52.811 "nvmf_rdma": { 00:19:52.811 "mask": "0x10", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "nvmf_tcp": { 00:19:52.811 "mask": "0x20", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "ftl": { 00:19:52.811 "mask": "0x40", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "blobfs": { 00:19:52.811 "mask": "0x80", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "dsa": { 00:19:52.811 "mask": "0x200", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "thread": { 00:19:52.811 "mask": "0x400", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "nvme_pcie": { 00:19:52.811 "mask": "0x800", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "iaa": { 00:19:52.811 "mask": "0x1000", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "nvme_tcp": { 00:19:52.811 "mask": "0x2000", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "bdev_nvme": { 00:19:52.811 "mask": "0x4000", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 }, 00:19:52.811 "sock": { 00:19:52.811 "mask": "0x8000", 00:19:52.811 "tpoint_mask": "0x0" 00:19:52.811 } 00:19:52.811 }' 00:19:52.811 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:19:53.072 00:19:53.072 real 0m0.248s 00:19:53.072 user 0m0.205s 00:19:53.072 sys 0m0.035s 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:53.072 11:28:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.072 ************************************ 00:19:53.072 END TEST rpc_trace_cmd_test 00:19:53.072 ************************************ 00:19:53.072 11:28:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:19:53.072 11:28:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:19:53.072 11:28:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:19:53.072 11:28:22 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:53.072 11:28:22 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:53.072 11:28:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 ************************************ 00:19:53.333 START TEST rpc_daemon_integrity 00:19:53.333 ************************************ 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:19:53.333 { 00:19:53.333 "name": "Malloc2", 00:19:53.333 "aliases": [ 00:19:53.333 "15551f95-c015-41e8-9b4b-a0f50d6e9db4" 00:19:53.333 ], 00:19:53.333 "product_name": "Malloc disk", 00:19:53.333 "block_size": 512, 00:19:53.333 "num_blocks": 16384, 00:19:53.333 "uuid": "15551f95-c015-41e8-9b4b-a0f50d6e9db4", 00:19:53.333 "assigned_rate_limits": { 00:19:53.333 "rw_ios_per_sec": 0, 00:19:53.333 "rw_mbytes_per_sec": 0, 00:19:53.333 "r_mbytes_per_sec": 0, 00:19:53.333 "w_mbytes_per_sec": 0 00:19:53.333 }, 00:19:53.333 "claimed": false, 00:19:53.333 "zoned": false, 00:19:53.333 "supported_io_types": { 00:19:53.333 "read": true, 00:19:53.333 "write": true, 00:19:53.333 "unmap": true, 00:19:53.333 "write_zeroes": true, 00:19:53.333 "flush": true, 00:19:53.333 "reset": true, 00:19:53.333 "compare": false, 00:19:53.333 "compare_and_write": false, 00:19:53.333 "abort": true, 00:19:53.333 "nvme_admin": false, 00:19:53.333 "nvme_io": false 00:19:53.333 }, 00:19:53.333 "memory_domains": [ 00:19:53.333 { 00:19:53.333 "dma_device_id": "system", 00:19:53.333 "dma_device_type": 1 00:19:53.333 }, 00:19:53.333 { 00:19:53.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.333 "dma_device_type": 2 00:19:53.333 } 00:19:53.333 ], 00:19:53.333 "driver_specific": {} 00:19:53.333 } 00:19:53.333 ]' 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 [2024-06-10 11:28:22.204727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:19:53.333 [2024-06-10 11:28:22.204756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.333 [2024-06-10 11:28:22.204770] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x214f4b0 00:19:53.333 [2024-06-10 11:28:22.204776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.333 [2024-06-10 11:28:22.205987] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.333 [2024-06-10 11:28:22.206007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:19:53.333 Passthru0 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:19:53.333 { 00:19:53.333 "name": "Malloc2", 00:19:53.333 "aliases": [ 00:19:53.333 "15551f95-c015-41e8-9b4b-a0f50d6e9db4" 00:19:53.333 ], 00:19:53.333 "product_name": "Malloc disk", 00:19:53.333 "block_size": 512, 00:19:53.333 "num_blocks": 16384, 00:19:53.333 "uuid": "15551f95-c015-41e8-9b4b-a0f50d6e9db4", 00:19:53.333 "assigned_rate_limits": { 00:19:53.333 "rw_ios_per_sec": 0, 00:19:53.333 "rw_mbytes_per_sec": 0, 00:19:53.333 "r_mbytes_per_sec": 0, 00:19:53.333 "w_mbytes_per_sec": 0 00:19:53.333 }, 00:19:53.333 "claimed": true, 00:19:53.333 "claim_type": "exclusive_write", 00:19:53.333 "zoned": false, 00:19:53.333 "supported_io_types": { 00:19:53.333 "read": true, 00:19:53.333 "write": true, 00:19:53.333 "unmap": true, 00:19:53.333 "write_zeroes": true, 00:19:53.333 "flush": true, 00:19:53.333 "reset": true, 00:19:53.333 "compare": false, 00:19:53.333 "compare_and_write": false, 00:19:53.333 "abort": true, 00:19:53.333 "nvme_admin": false, 00:19:53.333 "nvme_io": false 00:19:53.333 }, 00:19:53.333 "memory_domains": [ 00:19:53.333 { 00:19:53.333 "dma_device_id": "system", 00:19:53.333 "dma_device_type": 1 00:19:53.333 }, 00:19:53.333 { 00:19:53.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.333 "dma_device_type": 2 00:19:53.333 } 00:19:53.333 ], 00:19:53.333 "driver_specific": {} 00:19:53.333 }, 00:19:53.333 { 00:19:53.333 "name": "Passthru0", 00:19:53.333 "aliases": [ 00:19:53.333 "33b18773-ea5f-5d1b-ab98-98e36c86a78a" 00:19:53.333 ], 00:19:53.333 "product_name": "passthru", 00:19:53.333 "block_size": 512, 00:19:53.333 "num_blocks": 16384, 00:19:53.333 "uuid": "33b18773-ea5f-5d1b-ab98-98e36c86a78a", 00:19:53.333 "assigned_rate_limits": { 00:19:53.333 "rw_ios_per_sec": 0, 00:19:53.333 "rw_mbytes_per_sec": 0, 00:19:53.333 "r_mbytes_per_sec": 0, 00:19:53.333 "w_mbytes_per_sec": 0 00:19:53.333 }, 00:19:53.333 "claimed": false, 00:19:53.333 "zoned": false, 00:19:53.333 "supported_io_types": { 00:19:53.333 "read": true, 00:19:53.333 "write": true, 00:19:53.333 "unmap": true, 00:19:53.333 "write_zeroes": true, 00:19:53.333 "flush": true, 00:19:53.333 "reset": true, 00:19:53.333 "compare": false, 00:19:53.333 "compare_and_write": false, 00:19:53.333 "abort": true, 00:19:53.333 "nvme_admin": false, 00:19:53.333 "nvme_io": false 00:19:53.333 }, 00:19:53.333 "memory_domains": [ 00:19:53.333 { 00:19:53.333 "dma_device_id": "system", 00:19:53.333 "dma_device_type": 1 00:19:53.333 }, 00:19:53.333 { 00:19:53.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.333 "dma_device_type": 2 00:19:53.333 } 00:19:53.333 ], 00:19:53.333 "driver_specific": { 00:19:53.333 "passthru": { 00:19:53.333 "name": "Passthru0", 00:19:53.333 "base_bdev_name": "Malloc2" 00:19:53.333 } 00:19:53.333 } 00:19:53.333 } 00:19:53.333 ]' 00:19:53.333 11:28:22 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.333 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.334 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.594 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.594 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:19:53.594 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:19:53.594 11:28:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:19:53.594 00:19:53.594 real 0m0.288s 00:19:53.594 user 0m0.181s 00:19:53.594 sys 0m0.042s 00:19:53.594 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:53.594 11:28:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:53.594 ************************************ 00:19:53.594 END TEST rpc_daemon_integrity 00:19:53.594 ************************************ 00:19:53.594 11:28:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:53.594 11:28:22 rpc -- rpc/rpc.sh@84 -- # killprocess 2111956 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@949 -- # '[' -z 2111956 ']' 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@953 -- # kill -0 2111956 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@954 -- # uname 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2111956 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2111956' 00:19:53.594 killing process with pid 2111956 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@968 -- # kill 2111956 00:19:53.594 11:28:22 rpc -- common/autotest_common.sh@973 -- # wait 2111956 00:19:53.856 00:19:53.856 real 0m2.017s 00:19:53.856 user 0m2.703s 00:19:53.856 sys 0m0.697s 00:19:53.856 11:28:22 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:53.856 11:28:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.856 ************************************ 00:19:53.856 END TEST rpc 00:19:53.856 ************************************ 00:19:53.856 11:28:22 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:19:53.856 11:28:22 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:53.856 11:28:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:53.856 11:28:22 -- common/autotest_common.sh@10 -- # set +x 00:19:53.856 ************************************ 00:19:53.856 START TEST skip_rpc 00:19:53.856 ************************************ 00:19:53.856 11:28:22 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:19:53.856 * Looking for test storage... 00:19:54.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:19:54.118 11:28:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:19:54.118 11:28:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:19:54.118 11:28:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:19:54.118 11:28:22 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:54.118 11:28:22 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:54.118 11:28:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:54.118 ************************************ 00:19:54.118 START TEST skip_rpc 00:19:54.118 ************************************ 00:19:54.118 11:28:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:19:54.118 11:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2112473 00:19:54.118 11:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:54.118 11:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:19:54.118 11:28:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:19:54.118 [2024-06-10 11:28:22.942787] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
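The inner skip_rpc case launched here runs the target with --no-rpc-server and then asserts that an RPC call fails (the NOT/es=1 sequence in the next block of output). A bare-bones sketch of the same check, with relative paths assumed in place of the workspace paths used above:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC answered even though --no-rpc-server was given"; exit 1
    fi
    kill $spdk_pid
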
00:19:54.118 [2024-06-10 11:28:22.942861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112473 ] 00:19:54.118 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.118 [2024-06-10 11:28:23.008327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.118 [2024-06-10 11:28:23.084681] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.405 11:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:19:59.405 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2112473 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 2112473 ']' 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 2112473 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2112473 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2112473' 00:19:59.406 killing process with pid 2112473 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 2112473 00:19:59.406 11:28:27 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 2112473 00:19:59.406 00:19:59.406 real 0m5.279s 00:19:59.406 user 0m5.073s 00:19:59.406 sys 0m0.242s 00:19:59.406 11:28:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:59.406 11:28:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:59.406 ************************************ 00:19:59.406 END TEST skip_rpc 
00:19:59.406 ************************************ 00:19:59.406 11:28:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:19:59.406 11:28:28 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:59.406 11:28:28 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:59.406 11:28:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:59.406 ************************************ 00:19:59.406 START TEST skip_rpc_with_json 00:19:59.406 ************************************ 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2113649 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2113649 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 2113649 ']' 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:59.406 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:59.406 [2024-06-10 11:28:28.286166] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
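skip_rpc_with_json, which starts here, first exercises the negative case (querying a TCP transport before one exists), then creates the transport and snapshots the whole target state with save_config; the large JSON block that follows is that snapshot written to test/rpc/config.json. A sketch of the equivalent manual steps with the stock rpc.py client (paths assumed):

    ./scripts/rpc.py nvmf_get_transports --trtype tcp || echo "no tcp transport yet (expected error)"
    ./scripts/rpc.py nvmf_create_transport -t tcp        # target logs 'TCP Transport Init'
    ./scripts/rpc.py save_config > test/rpc/config.json  # full subsystem config, reloadable via --json
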
00:19:59.406 [2024-06-10 11:28:28.286216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113649 ] 00:19:59.406 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.406 [2024-06-10 11:28:28.346571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.666 [2024-06-10 11:28:28.412480] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.666 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:59.667 [2024-06-10 11:28:28.591911] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:19:59.667 request: 00:19:59.667 { 00:19:59.667 "trtype": "tcp", 00:19:59.667 "method": "nvmf_get_transports", 00:19:59.667 "req_id": 1 00:19:59.667 } 00:19:59.667 Got JSON-RPC error response 00:19:59.667 response: 00:19:59.667 { 00:19:59.667 "code": -19, 00:19:59.667 "message": "No such device" 00:19:59.667 } 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:59.667 [2024-06-10 11:28:28.604026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.667 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:59.928 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.928 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:19:59.928 { 00:19:59.928 "subsystems": [ 00:19:59.928 { 00:19:59.928 "subsystem": "vfio_user_target", 00:19:59.928 "config": null 00:19:59.928 }, 00:19:59.928 { 00:19:59.928 "subsystem": "keyring", 00:19:59.928 "config": [] 00:19:59.928 }, 00:19:59.928 { 00:19:59.928 "subsystem": "iobuf", 00:19:59.928 "config": [ 00:19:59.928 { 00:19:59.928 "method": "iobuf_set_options", 00:19:59.928 "params": { 00:19:59.928 "small_pool_count": 8192, 00:19:59.928 "large_pool_count": 1024, 00:19:59.928 "small_bufsize": 8192, 00:19:59.928 "large_bufsize": 135168 00:19:59.928 } 00:19:59.928 } 00:19:59.928 ] 00:19:59.928 }, 00:19:59.928 { 00:19:59.928 "subsystem": "sock", 00:19:59.928 "config": [ 00:19:59.928 { 00:19:59.928 "method": "sock_set_default_impl", 00:19:59.928 "params": { 00:19:59.928 "impl_name": "posix" 00:19:59.928 } 00:19:59.928 }, 00:19:59.928 { 00:19:59.928 "method": 
"sock_impl_set_options", 00:19:59.928 "params": { 00:19:59.928 "impl_name": "ssl", 00:19:59.928 "recv_buf_size": 4096, 00:19:59.928 "send_buf_size": 4096, 00:19:59.928 "enable_recv_pipe": true, 00:19:59.928 "enable_quickack": false, 00:19:59.928 "enable_placement_id": 0, 00:19:59.928 "enable_zerocopy_send_server": true, 00:19:59.928 "enable_zerocopy_send_client": false, 00:19:59.928 "zerocopy_threshold": 0, 00:19:59.928 "tls_version": 0, 00:19:59.928 "enable_ktls": false 00:19:59.928 } 00:19:59.928 }, 00:19:59.928 { 00:19:59.928 "method": "sock_impl_set_options", 00:19:59.928 "params": { 00:19:59.928 "impl_name": "posix", 00:19:59.929 "recv_buf_size": 2097152, 00:19:59.929 "send_buf_size": 2097152, 00:19:59.929 "enable_recv_pipe": true, 00:19:59.929 "enable_quickack": false, 00:19:59.929 "enable_placement_id": 0, 00:19:59.929 "enable_zerocopy_send_server": true, 00:19:59.929 "enable_zerocopy_send_client": false, 00:19:59.929 "zerocopy_threshold": 0, 00:19:59.929 "tls_version": 0, 00:19:59.929 "enable_ktls": false 00:19:59.929 } 00:19:59.929 } 00:19:59.929 ] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "vmd", 00:19:59.929 "config": [] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "accel", 00:19:59.929 "config": [ 00:19:59.929 { 00:19:59.929 "method": "accel_set_options", 00:19:59.929 "params": { 00:19:59.929 "small_cache_size": 128, 00:19:59.929 "large_cache_size": 16, 00:19:59.929 "task_count": 2048, 00:19:59.929 "sequence_count": 2048, 00:19:59.929 "buf_count": 2048 00:19:59.929 } 00:19:59.929 } 00:19:59.929 ] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "bdev", 00:19:59.929 "config": [ 00:19:59.929 { 00:19:59.929 "method": "bdev_set_options", 00:19:59.929 "params": { 00:19:59.929 "bdev_io_pool_size": 65535, 00:19:59.929 "bdev_io_cache_size": 256, 00:19:59.929 "bdev_auto_examine": true, 00:19:59.929 "iobuf_small_cache_size": 128, 00:19:59.929 "iobuf_large_cache_size": 16 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "bdev_raid_set_options", 00:19:59.929 "params": { 00:19:59.929 "process_window_size_kb": 1024 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "bdev_iscsi_set_options", 00:19:59.929 "params": { 00:19:59.929 "timeout_sec": 30 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "bdev_nvme_set_options", 00:19:59.929 "params": { 00:19:59.929 "action_on_timeout": "none", 00:19:59.929 "timeout_us": 0, 00:19:59.929 "timeout_admin_us": 0, 00:19:59.929 "keep_alive_timeout_ms": 10000, 00:19:59.929 "arbitration_burst": 0, 00:19:59.929 "low_priority_weight": 0, 00:19:59.929 "medium_priority_weight": 0, 00:19:59.929 "high_priority_weight": 0, 00:19:59.929 "nvme_adminq_poll_period_us": 10000, 00:19:59.929 "nvme_ioq_poll_period_us": 0, 00:19:59.929 "io_queue_requests": 0, 00:19:59.929 "delay_cmd_submit": true, 00:19:59.929 "transport_retry_count": 4, 00:19:59.929 "bdev_retry_count": 3, 00:19:59.929 "transport_ack_timeout": 0, 00:19:59.929 "ctrlr_loss_timeout_sec": 0, 00:19:59.929 "reconnect_delay_sec": 0, 00:19:59.929 "fast_io_fail_timeout_sec": 0, 00:19:59.929 "disable_auto_failback": false, 00:19:59.929 "generate_uuids": false, 00:19:59.929 "transport_tos": 0, 00:19:59.929 "nvme_error_stat": false, 00:19:59.929 "rdma_srq_size": 0, 00:19:59.929 "io_path_stat": false, 00:19:59.929 "allow_accel_sequence": false, 00:19:59.929 "rdma_max_cq_size": 0, 00:19:59.929 "rdma_cm_event_timeout_ms": 0, 00:19:59.929 "dhchap_digests": [ 00:19:59.929 "sha256", 00:19:59.929 "sha384", 00:19:59.929 "sha512" 
00:19:59.929 ], 00:19:59.929 "dhchap_dhgroups": [ 00:19:59.929 "null", 00:19:59.929 "ffdhe2048", 00:19:59.929 "ffdhe3072", 00:19:59.929 "ffdhe4096", 00:19:59.929 "ffdhe6144", 00:19:59.929 "ffdhe8192" 00:19:59.929 ] 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "bdev_nvme_set_hotplug", 00:19:59.929 "params": { 00:19:59.929 "period_us": 100000, 00:19:59.929 "enable": false 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "bdev_wait_for_examine" 00:19:59.929 } 00:19:59.929 ] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "scsi", 00:19:59.929 "config": null 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "scheduler", 00:19:59.929 "config": [ 00:19:59.929 { 00:19:59.929 "method": "framework_set_scheduler", 00:19:59.929 "params": { 00:19:59.929 "name": "static" 00:19:59.929 } 00:19:59.929 } 00:19:59.929 ] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "vhost_scsi", 00:19:59.929 "config": [] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "vhost_blk", 00:19:59.929 "config": [] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "ublk", 00:19:59.929 "config": [] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "nbd", 00:19:59.929 "config": [] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "nvmf", 00:19:59.929 "config": [ 00:19:59.929 { 00:19:59.929 "method": "nvmf_set_config", 00:19:59.929 "params": { 00:19:59.929 "discovery_filter": "match_any", 00:19:59.929 "admin_cmd_passthru": { 00:19:59.929 "identify_ctrlr": false 00:19:59.929 } 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "nvmf_set_max_subsystems", 00:19:59.929 "params": { 00:19:59.929 "max_subsystems": 1024 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "nvmf_set_crdt", 00:19:59.929 "params": { 00:19:59.929 "crdt1": 0, 00:19:59.929 "crdt2": 0, 00:19:59.929 "crdt3": 0 00:19:59.929 } 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "method": "nvmf_create_transport", 00:19:59.929 "params": { 00:19:59.929 "trtype": "TCP", 00:19:59.929 "max_queue_depth": 128, 00:19:59.929 "max_io_qpairs_per_ctrlr": 127, 00:19:59.929 "in_capsule_data_size": 4096, 00:19:59.929 "max_io_size": 131072, 00:19:59.929 "io_unit_size": 131072, 00:19:59.929 "max_aq_depth": 128, 00:19:59.929 "num_shared_buffers": 511, 00:19:59.929 "buf_cache_size": 4294967295, 00:19:59.929 "dif_insert_or_strip": false, 00:19:59.929 "zcopy": false, 00:19:59.929 "c2h_success": true, 00:19:59.929 "sock_priority": 0, 00:19:59.929 "abort_timeout_sec": 1, 00:19:59.929 "ack_timeout": 0, 00:19:59.929 "data_wr_pool_size": 0 00:19:59.929 } 00:19:59.929 } 00:19:59.929 ] 00:19:59.929 }, 00:19:59.929 { 00:19:59.929 "subsystem": "iscsi", 00:19:59.929 "config": [ 00:19:59.929 { 00:19:59.929 "method": "iscsi_set_options", 00:19:59.929 "params": { 00:19:59.929 "node_base": "iqn.2016-06.io.spdk", 00:19:59.929 "max_sessions": 128, 00:19:59.929 "max_connections_per_session": 2, 00:19:59.929 "max_queue_depth": 64, 00:19:59.929 "default_time2wait": 2, 00:19:59.929 "default_time2retain": 20, 00:19:59.929 "first_burst_length": 8192, 00:19:59.929 "immediate_data": true, 00:19:59.929 "allow_duplicated_isid": false, 00:19:59.929 "error_recovery_level": 0, 00:19:59.929 "nop_timeout": 60, 00:19:59.929 "nop_in_interval": 30, 00:19:59.929 "disable_chap": false, 00:19:59.929 "require_chap": false, 00:19:59.929 "mutual_chap": false, 00:19:59.929 "chap_group": 0, 00:19:59.929 "max_large_datain_per_connection": 64, 00:19:59.929 "max_r2t_per_connection": 4, 00:19:59.929 
"pdu_pool_size": 36864, 00:19:59.929 "immediate_data_pool_size": 16384, 00:19:59.929 "data_out_pool_size": 2048 00:19:59.929 } 00:19:59.929 } 00:19:59.929 ] 00:19:59.929 } 00:19:59.929 ] 00:19:59.929 } 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2113649 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2113649 ']' 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2113649 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2113649 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2113649' 00:19:59.929 killing process with pid 2113649 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2113649 00:19:59.929 11:28:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2113649 00:20:00.191 11:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2113842 00:20:00.191 11:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:20:00.191 11:28:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:20:05.480 11:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2113842 00:20:05.480 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 2113842 ']' 00:20:05.480 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 2113842 00:20:05.480 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:20:05.480 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:05.480 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2113842 00:20:05.480 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2113842' 00:20:05.481 killing process with pid 2113842 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 2113842 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 2113842 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:20:05.481 00:20:05.481 real 
0m6.080s 00:20:05.481 user 0m5.909s 00:20:05.481 sys 0m0.490s 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:05.481 ************************************ 00:20:05.481 END TEST skip_rpc_with_json 00:20:05.481 ************************************ 00:20:05.481 11:28:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:20:05.481 11:28:34 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:05.481 11:28:34 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:05.481 11:28:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.481 ************************************ 00:20:05.481 START TEST skip_rpc_with_delay 00:20:05.481 ************************************ 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:20:05.481 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:05.481 [2024-06-10 11:28:34.449320] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
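The second half of skip_rpc_with_json, concluded just above, restarts the target with --no-rpc-server plus the saved JSON and greps the captured log for 'TCP Transport Init' to prove the transport was restored without issuing any RPC. A compressed sketch of that round trip, with relative paths assumed for the binaries and the config/log files:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt && echo "transport restored from JSON config"
    rm test/rpc/log.txt
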
00:20:05.481 [2024-06-10 11:28:34.449401] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:20:05.742 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:20:05.742 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:05.742 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:05.742 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:05.742 00:20:05.742 real 0m0.074s 00:20:05.742 user 0m0.049s 00:20:05.742 sys 0m0.025s 00:20:05.742 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:05.742 11:28:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:20:05.742 ************************************ 00:20:05.742 END TEST skip_rpc_with_delay 00:20:05.742 ************************************ 00:20:05.742 11:28:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:20:05.742 11:28:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:20:05.742 11:28:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:20:05.742 11:28:34 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:05.742 11:28:34 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:05.742 11:28:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:05.742 ************************************ 00:20:05.742 START TEST exit_on_failed_rpc_init 00:20:05.742 ************************************ 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2114913 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2114913 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 2114913 ']' 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:05.742 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:20:05.742 [2024-06-10 11:28:34.601186] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:05.742 [2024-06-10 11:28:34.601236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114913 ] 00:20:05.742 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.742 [2024-06-10 11:28:34.661934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.003 [2024-06-10 11:28:34.733014] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:20:06.003 11:28:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:20:06.003 [2024-06-10 11:28:34.966775] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:06.003 [2024-06-10 11:28:34.966826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115079 ] 00:20:06.269 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.269 [2024-06-10 11:28:35.024263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.269 [2024-06-10 11:28:35.088403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.269 [2024-06-10 11:28:35.088462] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
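The "socket in use" error above is exactly what exit_on_failed_rpc_init provokes: the second spdk_tgt instance (core mask 0x2) dies because both targets default to the RPC socket /var/tmp/spdk.sock. Outside of this negative test, a second target would be pointed at its own socket with -r; a sketch follows, with the second socket path chosen for illustration only.

    # Sketch: run two targets side by side by giving the second one a separate RPC socket.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &                               # first target, default /var/tmp/spdk.sock
    "$SPDK_BIN" -m 0x2 -r /var/tmp/spdk_second.sock &  # second target, illustrative socket path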
00:20:06.269 [2024-06-10 11:28:35.088471] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:06.269 [2024-06-10 11:28:35.088477] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2114913 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 2114913 ']' 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 2114913 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2114913 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2114913' 00:20:06.269 killing process with pid 2114913 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 2114913 00:20:06.269 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 2114913 00:20:06.529 00:20:06.529 real 0m0.866s 00:20:06.529 user 0m1.021s 00:20:06.529 sys 0m0.332s 00:20:06.529 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:06.529 11:28:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:20:06.529 ************************************ 00:20:06.529 END TEST exit_on_failed_rpc_init 00:20:06.529 ************************************ 00:20:06.529 11:28:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:20:06.529 00:20:06.529 real 0m12.720s 00:20:06.529 user 0m12.204s 00:20:06.529 sys 0m1.384s 00:20:06.529 11:28:35 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:06.529 11:28:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:06.529 ************************************ 00:20:06.529 END TEST skip_rpc 00:20:06.529 ************************************ 00:20:06.529 11:28:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:20:06.529 11:28:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:06.529 11:28:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:06.529 11:28:35 -- 
common/autotest_common.sh@10 -- # set +x 00:20:06.791 ************************************ 00:20:06.791 START TEST rpc_client 00:20:06.791 ************************************ 00:20:06.791 11:28:35 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:20:06.791 * Looking for test storage... 00:20:06.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:20:06.791 11:28:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:20:06.791 OK 00:20:06.791 11:28:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:20:06.791 00:20:06.791 real 0m0.125s 00:20:06.791 user 0m0.055s 00:20:06.791 sys 0m0.078s 00:20:06.791 11:28:35 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:06.791 11:28:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:20:06.791 ************************************ 00:20:06.791 END TEST rpc_client 00:20:06.791 ************************************ 00:20:06.791 11:28:35 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:20:06.791 11:28:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:06.791 11:28:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:06.791 11:28:35 -- common/autotest_common.sh@10 -- # set +x 00:20:06.791 ************************************ 00:20:06.791 START TEST json_config 00:20:06.791 ************************************ 00:20:06.791 11:28:35 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:20:07.052 11:28:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.052 11:28:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.052 11:28:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.052 11:28:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.052 11:28:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.052 11:28:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.052 11:28:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.052 11:28:35 json_config -- paths/export.sh@5 -- # export PATH 00:20:07.052 11:28:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@47 -- # : 0 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.052 11:28:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.053 11:28:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.053 11:28:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:20:07.053 INFO: JSON configuration test init 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:07.053 11:28:35 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:20:07.053 11:28:35 json_config -- json_config/common.sh@9 -- # local app=target 00:20:07.053 11:28:35 json_config -- json_config/common.sh@10 -- # shift 00:20:07.053 11:28:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:20:07.053 11:28:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:20:07.053 11:28:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:20:07.053 11:28:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:07.053 11:28:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:07.053 11:28:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2115360 00:20:07.053 11:28:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:20:07.053 Waiting for target to run... 
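At this point the json_config harness blocks until the freshly launched target answers on its RPC socket. A simplified approximation of that wait is sketched below; the real waitforlisten helper in autotest_common.sh does additional bookkeeping, so this is an illustration rather than its exact implementation.

    # Poll the UNIX-domain RPC socket until the target responds (rpc_get_methods is
    # assumed as the probe here; the helper's exact probe may differ).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if "$RPC" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done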
00:20:07.053 11:28:35 json_config -- json_config/common.sh@25 -- # waitforlisten 2115360 /var/tmp/spdk_tgt.sock 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@830 -- # '[' -z 2115360 ']' 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:20:07.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:07.053 11:28:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:07.053 11:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:07.053 [2024-06-10 11:28:35.893716] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:07.053 [2024-06-10 11:28:35.893771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115360 ] 00:20:07.053 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.312 [2024-06-10 11:28:36.180043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.312 [2024-06-10 11:28:36.231623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.908 11:28:36 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:07.908 11:28:36 json_config -- common/autotest_common.sh@863 -- # return 0 00:20:07.908 11:28:36 json_config -- json_config/common.sh@26 -- # echo '' 00:20:07.908 00:20:07.908 11:28:36 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:20:07.908 11:28:36 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:20:07.908 11:28:36 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:07.908 11:28:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:07.908 11:28:36 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:20:07.908 11:28:36 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:20:07.908 11:28:36 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:07.908 11:28:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:07.908 11:28:36 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:20:07.908 11:28:36 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:20:07.908 11:28:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:20:08.487 11:28:37 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:20:08.487 11:28:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:20:08.487 11:28:37 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:08.487 11:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:08.487 11:28:37 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:20:08.487 11:28:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:20:08.487 11:28:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:20:08.487 11:28:37 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:20:08.487 11:28:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:20:08.487 11:28:37 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:20:08.747 11:28:37 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:20:08.747 11:28:37 json_config -- json_config/json_config.sh@48 -- # local get_types 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:20:08.748 11:28:37 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:08.748 11:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@55 -- # return 0 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:20:08.748 11:28:37 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:08.748 11:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:20:08.748 11:28:37 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:20:08.748 11:28:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:20:09.007 MallocForNvmf0 00:20:09.007 11:28:37 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:20:09.007 11:28:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:20:09.267 MallocForNvmf1 00:20:09.267 11:28:38 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:20:09.267 11:28:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:20:09.267 [2024-06-10 11:28:38.226180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.527 11:28:38 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.527 11:28:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.527 11:28:38 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:20:09.527 11:28:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:20:09.787 11:28:38 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:20:09.787 11:28:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:20:10.047 11:28:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:20:10.047 11:28:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:20:10.047 [2024-06-10 11:28:38.996579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:10.047 11:28:39 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:20:10.047 11:28:39 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:10.047 11:28:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:10.307 11:28:39 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:20:10.307 11:28:39 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:10.307 11:28:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:10.307 11:28:39 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:20:10.307 11:28:39 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:20:10.307 11:28:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:20:10.307 MallocBdevForConfigChangeCheck 00:20:10.567 11:28:39 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:20:10.567 11:28:39 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:10.567 11:28:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:10.567 11:28:39 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:20:10.567 11:28:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:10.827 11:28:39 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:20:10.827 INFO: shutting down applications... 
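The NVMe-oF configuration assembled above is built entirely through RPCs against the target's socket. Re-issuing the same sequence by hand, with the paths and socket used in this run, would look roughly like:

    # Same calls the test issues via tgt_rpc, condensed into one script.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck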
00:20:10.827 11:28:39 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:20:10.827 11:28:39 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:20:10.827 11:28:39 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:20:10.827 11:28:39 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:20:11.086 Calling clear_iscsi_subsystem 00:20:11.086 Calling clear_nvmf_subsystem 00:20:11.086 Calling clear_nbd_subsystem 00:20:11.086 Calling clear_ublk_subsystem 00:20:11.086 Calling clear_vhost_blk_subsystem 00:20:11.086 Calling clear_vhost_scsi_subsystem 00:20:11.086 Calling clear_bdev_subsystem 00:20:11.347 11:28:40 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:20:11.347 11:28:40 json_config -- json_config/json_config.sh@343 -- # count=100 00:20:11.347 11:28:40 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:20:11.347 11:28:40 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:11.347 11:28:40 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:20:11.347 11:28:40 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:20:11.606 11:28:40 json_config -- json_config/json_config.sh@345 -- # break 00:20:11.606 11:28:40 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:20:11.606 11:28:40 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:20:11.606 11:28:40 json_config -- json_config/common.sh@31 -- # local app=target 00:20:11.606 11:28:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:20:11.606 11:28:40 json_config -- json_config/common.sh@35 -- # [[ -n 2115360 ]] 00:20:11.606 11:28:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2115360 00:20:11.606 11:28:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:20:11.606 11:28:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:11.606 11:28:40 json_config -- json_config/common.sh@41 -- # kill -0 2115360 00:20:11.606 11:28:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:20:12.177 11:28:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:20:12.177 11:28:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:12.177 11:28:40 json_config -- json_config/common.sh@41 -- # kill -0 2115360 00:20:12.177 11:28:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:20:12.177 11:28:40 json_config -- json_config/common.sh@43 -- # break 00:20:12.177 11:28:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:20:12.177 11:28:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:20:12.177 SPDK target shutdown done 00:20:12.177 11:28:40 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:20:12.177 INFO: relaunching applications... 
00:20:12.177 11:28:40 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:20:12.177 11:28:40 json_config -- json_config/common.sh@9 -- # local app=target 00:20:12.177 11:28:40 json_config -- json_config/common.sh@10 -- # shift 00:20:12.177 11:28:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:20:12.177 11:28:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:20:12.177 11:28:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:20:12.177 11:28:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:12.177 11:28:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:12.177 11:28:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2116495 00:20:12.177 11:28:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:20:12.177 Waiting for target to run... 00:20:12.177 11:28:40 json_config -- json_config/common.sh@25 -- # waitforlisten 2116495 /var/tmp/spdk_tgt.sock 00:20:12.177 11:28:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:20:12.177 11:28:40 json_config -- common/autotest_common.sh@830 -- # '[' -z 2116495 ']' 00:20:12.177 11:28:40 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:12.177 11:28:40 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:12.177 11:28:40 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:20:12.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:12.177 11:28:40 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:12.177 11:28:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:12.177 [2024-06-10 11:28:40.969771] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:12.177 [2024-06-10 11:28:40.969829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116495 ] 00:20:12.177 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.437 [2024-06-10 11:28:41.246811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.437 [2024-06-10 11:28:41.298659] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.007 [2024-06-10 11:28:41.796040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.007 [2024-06-10 11:28:41.828412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:13.007 11:28:41 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:13.007 11:28:41 json_config -- common/autotest_common.sh@863 -- # return 0 00:20:13.007 11:28:41 json_config -- json_config/common.sh@26 -- # echo '' 00:20:13.007 00:20:13.007 11:28:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:20:13.007 11:28:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:20:13.007 INFO: Checking if target configuration is the same... 
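The "same configuration" check that follows compares the relaunched target's live configuration against the spdk_tgt_config.json it was started from: both documents are normalized with config_filter.py -method sort and then diffed. A condensed sketch of that comparison, assuming config_filter.py reads its document on stdin as the pipeline in json_diff.sh suggests:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live_sorted.json
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > /tmp/file_sorted.json
    diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'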
00:20:13.007 11:28:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:20:13.008 11:28:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:20:13.008 11:28:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:13.008 + '[' 2 -ne 2 ']' 00:20:13.008 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:20:13.008 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:20:13.008 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:20:13.008 +++ basename /dev/fd/62 00:20:13.008 ++ mktemp /tmp/62.XXX 00:20:13.008 + tmp_file_1=/tmp/62.UBu 00:20:13.008 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:20:13.008 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:20:13.008 + tmp_file_2=/tmp/spdk_tgt_config.json.JrD 00:20:13.008 + ret=0 00:20:13.008 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:20:13.268 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:20:13.528 + diff -u /tmp/62.UBu /tmp/spdk_tgt_config.json.JrD 00:20:13.528 + echo 'INFO: JSON config files are the same' 00:20:13.528 INFO: JSON config files are the same 00:20:13.528 + rm /tmp/62.UBu /tmp/spdk_tgt_config.json.JrD 00:20:13.528 + exit 0 00:20:13.528 11:28:42 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:20:13.528 11:28:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:20:13.528 INFO: changing configuration and checking if this can be detected... 00:20:13.528 11:28:42 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:20:13.528 11:28:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:20:13.528 11:28:42 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:20:13.528 11:28:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:20:13.528 11:28:42 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:20:13.528 + '[' 2 -ne 2 ']' 00:20:13.528 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:20:13.528 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:20:13.528 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:20:13.528 +++ basename /dev/fd/62 00:20:13.528 ++ mktemp /tmp/62.XXX 00:20:13.528 + tmp_file_1=/tmp/62.8hJ 00:20:13.528 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:20:13.528 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:20:13.528 + tmp_file_2=/tmp/spdk_tgt_config.json.v8L 00:20:13.528 + ret=0 00:20:13.528 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:20:14.097 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:20:14.097 + diff -u /tmp/62.8hJ /tmp/spdk_tgt_config.json.v8L 00:20:14.097 + ret=1 00:20:14.097 + echo '=== Start of file: /tmp/62.8hJ ===' 00:20:14.097 + cat /tmp/62.8hJ 00:20:14.097 + echo '=== End of file: /tmp/62.8hJ ===' 00:20:14.097 + echo '' 00:20:14.097 + echo '=== Start of file: /tmp/spdk_tgt_config.json.v8L ===' 00:20:14.097 + cat /tmp/spdk_tgt_config.json.v8L 00:20:14.097 + echo '=== End of file: /tmp/spdk_tgt_config.json.v8L ===' 00:20:14.097 + echo '' 00:20:14.097 + rm /tmp/62.8hJ /tmp/spdk_tgt_config.json.v8L 00:20:14.097 + exit 1 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:20:14.097 INFO: configuration change detected. 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 2116495 ]] 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@193 -- # uname -s 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:14.097 11:28:42 json_config -- json_config/json_config.sh@323 -- # killprocess 2116495 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@949 -- # '[' -z 2116495 ']' 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@953 -- # kill -0 2116495 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@954 -- # uname 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:14.097 11:28:42 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2116495 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2116495' 00:20:14.097 killing process with pid 2116495 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@968 -- # kill 2116495 00:20:14.097 11:28:42 json_config -- common/autotest_common.sh@973 -- # wait 2116495 00:20:14.357 11:28:43 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:20:14.357 11:28:43 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:20:14.357 11:28:43 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:14.357 11:28:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:14.357 11:28:43 json_config -- json_config/json_config.sh@328 -- # return 0 00:20:14.358 11:28:43 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:20:14.358 INFO: Success 00:20:14.358 00:20:14.358 real 0m7.578s 00:20:14.358 user 0m9.759s 00:20:14.358 sys 0m1.700s 00:20:14.358 11:28:43 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:14.358 11:28:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:14.358 ************************************ 00:20:14.358 END TEST json_config 00:20:14.358 ************************************ 00:20:14.618 11:28:43 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:20:14.618 11:28:43 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:14.618 11:28:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:14.618 11:28:43 -- common/autotest_common.sh@10 -- # set +x 00:20:14.618 ************************************ 00:20:14.618 START TEST json_config_extra_key 00:20:14.618 ************************************ 00:20:14.619 11:28:43 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.619 11:28:43 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.619 11:28:43 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.619 11:28:43 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.619 11:28:43 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.619 11:28:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.619 11:28:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.619 11:28:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.619 11:28:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:20:14.619 11:28:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.619 11:28:43 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.619 11:28:43 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:20:14.619 INFO: launching applications... 00:20:14.619 11:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2117237 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:20:14.619 Waiting for target to run... 
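Here the target is launched straight from a JSON configuration (--json extra_key.json) instead of being configured over RPC. The contents of extra_key.json are not reproduced in this log; purely as an illustration of the startup-config shape accepted by --json, a minimal file and launch could look like:

    # Illustrative config only; file name, bdev name and sizes are made up for the example.
    cat > /tmp/minimal_startup_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 } }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --json /tmp/minimal_startup_config.json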
00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2117237 /var/tmp/spdk_tgt.sock 00:20:14.619 11:28:43 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 2117237 ']' 00:20:14.619 11:28:43 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:14.619 11:28:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:20:14.619 11:28:43 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:14.619 11:28:43 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:20:14.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:14.619 11:28:43 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:14.619 11:28:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:20:14.619 [2024-06-10 11:28:43.538110] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:14.619 [2024-06-10 11:28:43.538179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117237 ] 00:20:14.619 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.879 [2024-06-10 11:28:43.818632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.138 [2024-06-10 11:28:43.868486] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.707 11:28:44 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:15.707 11:28:44 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:20:15.707 00:20:15.707 11:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:20:15.707 INFO: shutting down applications... 
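The shutdown that follows uses the same pattern applied to every target in these tests: send SIGINT, then poll the PID for up to thirty half-second intervals before declaring the shutdown complete. Condensed from the xtrace of json_config/common.sh visible below, with the PID from this run:

    pid=2117237
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done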
00:20:15.707 11:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2117237 ]] 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2117237 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2117237 00:20:15.707 11:28:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:20:15.967 11:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:20:15.967 11:28:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:20:15.967 11:28:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2117237 00:20:15.967 11:28:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:20:15.967 11:28:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:20:15.967 11:28:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:20:15.967 11:28:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:20:15.967 SPDK target shutdown done 00:20:15.967 11:28:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:20:15.967 Success 00:20:15.967 00:20:15.967 real 0m1.550s 00:20:15.967 user 0m1.293s 00:20:15.967 sys 0m0.378s 00:20:15.967 11:28:44 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:15.967 11:28:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:20:15.967 ************************************ 00:20:15.967 END TEST json_config_extra_key 00:20:15.967 ************************************ 00:20:16.228 11:28:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:20:16.228 11:28:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:16.228 11:28:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:16.228 11:28:44 -- common/autotest_common.sh@10 -- # set +x 00:20:16.228 ************************************ 00:20:16.228 START TEST alias_rpc 00:20:16.228 ************************************ 00:20:16.228 11:28:45 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:20:16.228 * Looking for test storage... 
00:20:16.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:20:16.228 11:28:45 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:20:16.228 11:28:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2117591 00:20:16.228 11:28:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2117591 00:20:16.228 11:28:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:16.228 11:28:45 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 2117591 ']' 00:20:16.228 11:28:45 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.228 11:28:45 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:16.228 11:28:45 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.228 11:28:45 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:16.228 11:28:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:16.228 [2024-06-10 11:28:45.167654] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:16.228 [2024-06-10 11:28:45.167742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117591 ] 00:20:16.228 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.489 [2024-06-10 11:28:45.234079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.489 [2024-06-10 11:28:45.308402] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.060 11:28:46 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:17.060 11:28:46 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:20:17.060 11:28:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:20:17.321 11:28:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2117591 00:20:17.321 11:28:46 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 2117591 ']' 00:20:17.321 11:28:46 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 2117591 00:20:17.321 11:28:46 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:20:17.321 11:28:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:17.321 11:28:46 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2117591 00:20:17.583 11:28:46 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:17.583 11:28:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:17.583 11:28:46 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2117591' 00:20:17.583 killing process with pid 2117591 00:20:17.583 11:28:46 alias_rpc -- common/autotest_common.sh@968 -- # kill 2117591 00:20:17.583 11:28:46 alias_rpc -- common/autotest_common.sh@973 -- # wait 2117591 00:20:17.583 00:20:17.583 real 0m1.505s 00:20:17.583 user 0m1.764s 00:20:17.583 sys 0m0.372s 00:20:17.583 11:28:46 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:17.583 11:28:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:17.583 
************************************ 00:20:17.583 END TEST alias_rpc 00:20:17.583 ************************************ 00:20:17.583 11:28:46 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:20:17.583 11:28:46 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:20:17.583 11:28:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:17.583 11:28:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:17.583 11:28:46 -- common/autotest_common.sh@10 -- # set +x 00:20:17.845 ************************************ 00:20:17.845 START TEST spdkcli_tcp 00:20:17.845 ************************************ 00:20:17.845 11:28:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:20:17.845 * Looking for test storage... 00:20:17.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:20:17.845 11:28:46 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:17.845 11:28:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2117950 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2117950 00:20:17.845 11:28:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:17.845 11:28:46 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 2117950 ']' 00:20:17.845 11:28:46 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.845 11:28:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:17.846 11:28:46 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.846 11:28:46 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:17.846 11:28:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.846 [2024-06-10 11:28:46.742151] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:17.846 [2024-06-10 11:28:46.742204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117950 ] 00:20:17.846 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.846 [2024-06-10 11:28:46.801412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:18.106 [2024-06-10 11:28:46.866616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.106 [2024-06-10 11:28:46.866621] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.678 11:28:47 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:18.678 11:28:47 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:20:18.678 11:28:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2118060 00:20:18.678 11:28:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:20:18.678 11:28:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:20:18.940 [ 00:20:18.940 "bdev_malloc_delete", 00:20:18.940 "bdev_malloc_create", 00:20:18.940 "bdev_null_resize", 00:20:18.940 "bdev_null_delete", 00:20:18.940 "bdev_null_create", 00:20:18.940 "bdev_nvme_cuse_unregister", 00:20:18.940 "bdev_nvme_cuse_register", 00:20:18.940 "bdev_opal_new_user", 00:20:18.940 "bdev_opal_set_lock_state", 00:20:18.940 "bdev_opal_delete", 00:20:18.940 "bdev_opal_get_info", 00:20:18.940 "bdev_opal_create", 00:20:18.940 "bdev_nvme_opal_revert", 00:20:18.940 "bdev_nvme_opal_init", 00:20:18.940 "bdev_nvme_send_cmd", 00:20:18.940 "bdev_nvme_get_path_iostat", 00:20:18.940 "bdev_nvme_get_mdns_discovery_info", 00:20:18.940 "bdev_nvme_stop_mdns_discovery", 00:20:18.940 "bdev_nvme_start_mdns_discovery", 00:20:18.940 "bdev_nvme_set_multipath_policy", 00:20:18.940 "bdev_nvme_set_preferred_path", 00:20:18.940 "bdev_nvme_get_io_paths", 00:20:18.940 "bdev_nvme_remove_error_injection", 00:20:18.940 "bdev_nvme_add_error_injection", 00:20:18.940 "bdev_nvme_get_discovery_info", 00:20:18.940 "bdev_nvme_stop_discovery", 00:20:18.940 "bdev_nvme_start_discovery", 00:20:18.940 "bdev_nvme_get_controller_health_info", 00:20:18.940 "bdev_nvme_disable_controller", 00:20:18.940 "bdev_nvme_enable_controller", 00:20:18.940 "bdev_nvme_reset_controller", 00:20:18.940 "bdev_nvme_get_transport_statistics", 00:20:18.940 "bdev_nvme_apply_firmware", 00:20:18.940 "bdev_nvme_detach_controller", 00:20:18.940 "bdev_nvme_get_controllers", 00:20:18.940 "bdev_nvme_attach_controller", 00:20:18.940 "bdev_nvme_set_hotplug", 00:20:18.940 "bdev_nvme_set_options", 00:20:18.940 "bdev_passthru_delete", 00:20:18.940 "bdev_passthru_create", 00:20:18.940 "bdev_lvol_set_parent_bdev", 00:20:18.940 "bdev_lvol_set_parent", 00:20:18.940 "bdev_lvol_check_shallow_copy", 00:20:18.940 "bdev_lvol_start_shallow_copy", 00:20:18.940 "bdev_lvol_grow_lvstore", 00:20:18.940 "bdev_lvol_get_lvols", 00:20:18.940 "bdev_lvol_get_lvstores", 00:20:18.940 "bdev_lvol_delete", 00:20:18.940 "bdev_lvol_set_read_only", 00:20:18.940 "bdev_lvol_resize", 00:20:18.940 "bdev_lvol_decouple_parent", 00:20:18.940 "bdev_lvol_inflate", 00:20:18.940 "bdev_lvol_rename", 00:20:18.940 "bdev_lvol_clone_bdev", 00:20:18.940 "bdev_lvol_clone", 00:20:18.940 "bdev_lvol_snapshot", 00:20:18.940 "bdev_lvol_create", 00:20:18.940 "bdev_lvol_delete_lvstore", 00:20:18.940 "bdev_lvol_rename_lvstore", 
00:20:18.940 "bdev_lvol_create_lvstore", 00:20:18.940 "bdev_raid_set_options", 00:20:18.940 "bdev_raid_remove_base_bdev", 00:20:18.940 "bdev_raid_add_base_bdev", 00:20:18.940 "bdev_raid_delete", 00:20:18.940 "bdev_raid_create", 00:20:18.940 "bdev_raid_get_bdevs", 00:20:18.940 "bdev_error_inject_error", 00:20:18.940 "bdev_error_delete", 00:20:18.940 "bdev_error_create", 00:20:18.940 "bdev_split_delete", 00:20:18.940 "bdev_split_create", 00:20:18.940 "bdev_delay_delete", 00:20:18.940 "bdev_delay_create", 00:20:18.940 "bdev_delay_update_latency", 00:20:18.940 "bdev_zone_block_delete", 00:20:18.940 "bdev_zone_block_create", 00:20:18.940 "blobfs_create", 00:20:18.940 "blobfs_detect", 00:20:18.940 "blobfs_set_cache_size", 00:20:18.940 "bdev_aio_delete", 00:20:18.940 "bdev_aio_rescan", 00:20:18.940 "bdev_aio_create", 00:20:18.940 "bdev_ftl_set_property", 00:20:18.940 "bdev_ftl_get_properties", 00:20:18.940 "bdev_ftl_get_stats", 00:20:18.940 "bdev_ftl_unmap", 00:20:18.940 "bdev_ftl_unload", 00:20:18.940 "bdev_ftl_delete", 00:20:18.940 "bdev_ftl_load", 00:20:18.940 "bdev_ftl_create", 00:20:18.940 "bdev_virtio_attach_controller", 00:20:18.940 "bdev_virtio_scsi_get_devices", 00:20:18.940 "bdev_virtio_detach_controller", 00:20:18.940 "bdev_virtio_blk_set_hotplug", 00:20:18.940 "bdev_iscsi_delete", 00:20:18.940 "bdev_iscsi_create", 00:20:18.940 "bdev_iscsi_set_options", 00:20:18.940 "accel_error_inject_error", 00:20:18.940 "ioat_scan_accel_module", 00:20:18.940 "dsa_scan_accel_module", 00:20:18.940 "iaa_scan_accel_module", 00:20:18.940 "vfu_virtio_create_scsi_endpoint", 00:20:18.940 "vfu_virtio_scsi_remove_target", 00:20:18.940 "vfu_virtio_scsi_add_target", 00:20:18.940 "vfu_virtio_create_blk_endpoint", 00:20:18.940 "vfu_virtio_delete_endpoint", 00:20:18.940 "keyring_file_remove_key", 00:20:18.940 "keyring_file_add_key", 00:20:18.940 "keyring_linux_set_options", 00:20:18.940 "iscsi_get_histogram", 00:20:18.940 "iscsi_enable_histogram", 00:20:18.940 "iscsi_set_options", 00:20:18.940 "iscsi_get_auth_groups", 00:20:18.940 "iscsi_auth_group_remove_secret", 00:20:18.940 "iscsi_auth_group_add_secret", 00:20:18.940 "iscsi_delete_auth_group", 00:20:18.940 "iscsi_create_auth_group", 00:20:18.940 "iscsi_set_discovery_auth", 00:20:18.940 "iscsi_get_options", 00:20:18.940 "iscsi_target_node_request_logout", 00:20:18.940 "iscsi_target_node_set_redirect", 00:20:18.940 "iscsi_target_node_set_auth", 00:20:18.940 "iscsi_target_node_add_lun", 00:20:18.940 "iscsi_get_stats", 00:20:18.940 "iscsi_get_connections", 00:20:18.940 "iscsi_portal_group_set_auth", 00:20:18.940 "iscsi_start_portal_group", 00:20:18.940 "iscsi_delete_portal_group", 00:20:18.940 "iscsi_create_portal_group", 00:20:18.940 "iscsi_get_portal_groups", 00:20:18.940 "iscsi_delete_target_node", 00:20:18.940 "iscsi_target_node_remove_pg_ig_maps", 00:20:18.940 "iscsi_target_node_add_pg_ig_maps", 00:20:18.940 "iscsi_create_target_node", 00:20:18.940 "iscsi_get_target_nodes", 00:20:18.940 "iscsi_delete_initiator_group", 00:20:18.940 "iscsi_initiator_group_remove_initiators", 00:20:18.940 "iscsi_initiator_group_add_initiators", 00:20:18.940 "iscsi_create_initiator_group", 00:20:18.940 "iscsi_get_initiator_groups", 00:20:18.940 "nvmf_set_crdt", 00:20:18.940 "nvmf_set_config", 00:20:18.940 "nvmf_set_max_subsystems", 00:20:18.940 "nvmf_stop_mdns_prr", 00:20:18.940 "nvmf_publish_mdns_prr", 00:20:18.940 "nvmf_subsystem_get_listeners", 00:20:18.940 "nvmf_subsystem_get_qpairs", 00:20:18.940 "nvmf_subsystem_get_controllers", 00:20:18.940 "nvmf_get_stats", 00:20:18.941 
"nvmf_get_transports", 00:20:18.941 "nvmf_create_transport", 00:20:18.941 "nvmf_get_targets", 00:20:18.941 "nvmf_delete_target", 00:20:18.941 "nvmf_create_target", 00:20:18.941 "nvmf_subsystem_allow_any_host", 00:20:18.941 "nvmf_subsystem_remove_host", 00:20:18.941 "nvmf_subsystem_add_host", 00:20:18.941 "nvmf_ns_remove_host", 00:20:18.941 "nvmf_ns_add_host", 00:20:18.941 "nvmf_subsystem_remove_ns", 00:20:18.941 "nvmf_subsystem_add_ns", 00:20:18.941 "nvmf_subsystem_listener_set_ana_state", 00:20:18.941 "nvmf_discovery_get_referrals", 00:20:18.941 "nvmf_discovery_remove_referral", 00:20:18.941 "nvmf_discovery_add_referral", 00:20:18.941 "nvmf_subsystem_remove_listener", 00:20:18.941 "nvmf_subsystem_add_listener", 00:20:18.941 "nvmf_delete_subsystem", 00:20:18.941 "nvmf_create_subsystem", 00:20:18.941 "nvmf_get_subsystems", 00:20:18.941 "env_dpdk_get_mem_stats", 00:20:18.941 "nbd_get_disks", 00:20:18.941 "nbd_stop_disk", 00:20:18.941 "nbd_start_disk", 00:20:18.941 "ublk_recover_disk", 00:20:18.941 "ublk_get_disks", 00:20:18.941 "ublk_stop_disk", 00:20:18.941 "ublk_start_disk", 00:20:18.941 "ublk_destroy_target", 00:20:18.941 "ublk_create_target", 00:20:18.941 "virtio_blk_create_transport", 00:20:18.941 "virtio_blk_get_transports", 00:20:18.941 "vhost_controller_set_coalescing", 00:20:18.941 "vhost_get_controllers", 00:20:18.941 "vhost_delete_controller", 00:20:18.941 "vhost_create_blk_controller", 00:20:18.941 "vhost_scsi_controller_remove_target", 00:20:18.941 "vhost_scsi_controller_add_target", 00:20:18.941 "vhost_start_scsi_controller", 00:20:18.941 "vhost_create_scsi_controller", 00:20:18.941 "thread_set_cpumask", 00:20:18.941 "framework_get_scheduler", 00:20:18.941 "framework_set_scheduler", 00:20:18.941 "framework_get_reactors", 00:20:18.941 "thread_get_io_channels", 00:20:18.941 "thread_get_pollers", 00:20:18.941 "thread_get_stats", 00:20:18.941 "framework_monitor_context_switch", 00:20:18.941 "spdk_kill_instance", 00:20:18.941 "log_enable_timestamps", 00:20:18.941 "log_get_flags", 00:20:18.941 "log_clear_flag", 00:20:18.941 "log_set_flag", 00:20:18.941 "log_get_level", 00:20:18.941 "log_set_level", 00:20:18.941 "log_get_print_level", 00:20:18.941 "log_set_print_level", 00:20:18.941 "framework_enable_cpumask_locks", 00:20:18.941 "framework_disable_cpumask_locks", 00:20:18.941 "framework_wait_init", 00:20:18.941 "framework_start_init", 00:20:18.941 "scsi_get_devices", 00:20:18.941 "bdev_get_histogram", 00:20:18.941 "bdev_enable_histogram", 00:20:18.941 "bdev_set_qos_limit", 00:20:18.941 "bdev_set_qd_sampling_period", 00:20:18.941 "bdev_get_bdevs", 00:20:18.941 "bdev_reset_iostat", 00:20:18.941 "bdev_get_iostat", 00:20:18.941 "bdev_examine", 00:20:18.941 "bdev_wait_for_examine", 00:20:18.941 "bdev_set_options", 00:20:18.941 "notify_get_notifications", 00:20:18.941 "notify_get_types", 00:20:18.941 "accel_get_stats", 00:20:18.941 "accel_set_options", 00:20:18.941 "accel_set_driver", 00:20:18.941 "accel_crypto_key_destroy", 00:20:18.941 "accel_crypto_keys_get", 00:20:18.941 "accel_crypto_key_create", 00:20:18.941 "accel_assign_opc", 00:20:18.941 "accel_get_module_info", 00:20:18.941 "accel_get_opc_assignments", 00:20:18.941 "vmd_rescan", 00:20:18.941 "vmd_remove_device", 00:20:18.941 "vmd_enable", 00:20:18.941 "sock_get_default_impl", 00:20:18.941 "sock_set_default_impl", 00:20:18.941 "sock_impl_set_options", 00:20:18.941 "sock_impl_get_options", 00:20:18.941 "iobuf_get_stats", 00:20:18.941 "iobuf_set_options", 00:20:18.941 "keyring_get_keys", 00:20:18.941 "framework_get_pci_devices", 
00:20:18.941 "framework_get_config", 00:20:18.941 "framework_get_subsystems", 00:20:18.941 "vfu_tgt_set_base_path", 00:20:18.941 "trace_get_info", 00:20:18.941 "trace_get_tpoint_group_mask", 00:20:18.941 "trace_disable_tpoint_group", 00:20:18.941 "trace_enable_tpoint_group", 00:20:18.941 "trace_clear_tpoint_mask", 00:20:18.941 "trace_set_tpoint_mask", 00:20:18.941 "spdk_get_version", 00:20:18.941 "rpc_get_methods" 00:20:18.941 ] 00:20:18.941 11:28:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.941 11:28:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:18.941 11:28:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2117950 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 2117950 ']' 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 2117950 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2117950 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2117950' 00:20:18.941 killing process with pid 2117950 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 2117950 00:20:18.941 11:28:47 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 2117950 00:20:19.203 00:20:19.203 real 0m1.519s 00:20:19.203 user 0m2.972s 00:20:19.203 sys 0m0.391s 00:20:19.203 11:28:48 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:19.203 11:28:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:19.203 ************************************ 00:20:19.203 END TEST spdkcli_tcp 00:20:19.203 ************************************ 00:20:19.203 11:28:48 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:20:19.203 11:28:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:19.203 11:28:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:19.203 11:28:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.203 ************************************ 00:20:19.203 START TEST dpdk_mem_utility 00:20:19.203 ************************************ 00:20:19.464 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:20:19.464 * Looking for test storage... 
00:20:19.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:20:19.464 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:20:19.465 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:20:19.465 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2118338 00:20:19.465 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2118338 00:20:19.465 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 2118338 ']' 00:20:19.465 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.465 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:19.465 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.465 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:19.465 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:20:19.465 [2024-06-10 11:28:48.329346] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:19.465 [2024-06-10 11:28:48.329414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118338 ] 00:20:19.465 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.465 [2024-06-10 11:28:48.395051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.726 [2024-06-10 11:28:48.472055] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.726 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:19.726 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:20:19.726 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:20:19.726 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:20:19.726 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.726 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:20:19.726 { 00:20:19.726 "filename": "/tmp/spdk_mem_dump.txt" 00:20:19.726 } 00:20:19.726 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.726 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:20:19.726 DPDK memory size 814.000000 MiB in 1 heap(s) 00:20:19.726 1 heaps totaling size 814.000000 MiB 00:20:19.726 size: 814.000000 MiB heap id: 0 00:20:19.726 end heaps---------- 00:20:19.726 8 mempools totaling size 598.116089 MiB 00:20:19.726 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:20:19.726 size: 158.602051 MiB name: PDU_data_out_Pool 00:20:19.726 size: 84.521057 MiB name: bdev_io_2118338 00:20:19.726 size: 51.011292 MiB name: evtpool_2118338 00:20:19.726 size: 50.003479 MiB name: 
msgpool_2118338 00:20:19.726 size: 21.763794 MiB name: PDU_Pool 00:20:19.726 size: 19.513306 MiB name: SCSI_TASK_Pool 00:20:19.726 size: 0.026123 MiB name: Session_Pool 00:20:19.726 end mempools------- 00:20:19.726 6 memzones totaling size 4.142822 MiB 00:20:19.726 size: 1.000366 MiB name: RG_ring_0_2118338 00:20:19.726 size: 1.000366 MiB name: RG_ring_1_2118338 00:20:19.726 size: 1.000366 MiB name: RG_ring_4_2118338 00:20:19.726 size: 1.000366 MiB name: RG_ring_5_2118338 00:20:19.726 size: 0.125366 MiB name: RG_ring_2_2118338 00:20:19.726 size: 0.015991 MiB name: RG_ring_3_2118338 00:20:19.726 end memzones------- 00:20:19.987 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:20:19.987 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:20:19.987 list of free elements. size: 12.519348 MiB 00:20:19.987 element at address: 0x200000400000 with size: 1.999512 MiB 00:20:19.987 element at address: 0x200018e00000 with size: 0.999878 MiB 00:20:19.987 element at address: 0x200019000000 with size: 0.999878 MiB 00:20:19.987 element at address: 0x200003e00000 with size: 0.996277 MiB 00:20:19.987 element at address: 0x200031c00000 with size: 0.994446 MiB 00:20:19.987 element at address: 0x200013800000 with size: 0.978699 MiB 00:20:19.987 element at address: 0x200007000000 with size: 0.959839 MiB 00:20:19.987 element at address: 0x200019200000 with size: 0.936584 MiB 00:20:19.987 element at address: 0x200000200000 with size: 0.841614 MiB 00:20:19.987 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:20:19.987 element at address: 0x20000b200000 with size: 0.490723 MiB 00:20:19.987 element at address: 0x200000800000 with size: 0.487793 MiB 00:20:19.987 element at address: 0x200019400000 with size: 0.485657 MiB 00:20:19.987 element at address: 0x200027e00000 with size: 0.410034 MiB 00:20:19.987 element at address: 0x200003a00000 with size: 0.355530 MiB 00:20:19.987 list of standard malloc elements. 
size: 199.218079 MiB 00:20:19.987 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:20:19.987 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:20:19.987 element at address: 0x200018efff80 with size: 1.000122 MiB 00:20:19.987 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:20:19.987 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:20:19.987 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:20:19.987 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:20:19.987 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:20:19.987 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:20:19.987 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:20:19.987 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:20:19.987 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200003adb300 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200003adb500 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200003affa80 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200003affb40 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:20:19.987 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:20:19.987 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:20:19.987 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:20:19.987 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:20:19.987 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:20:19.987 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200027e69040 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:20:19.987 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:20:19.987 list of memzone associated elements. 
size: 602.262573 MiB 00:20:19.987 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:20:19.987 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:20:19.987 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:20:19.987 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:20:19.988 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:20:19.988 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2118338_0 00:20:19.988 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:20:19.988 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2118338_0 00:20:19.988 element at address: 0x200003fff380 with size: 48.003052 MiB 00:20:19.988 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2118338_0 00:20:19.988 element at address: 0x2000195be940 with size: 20.255554 MiB 00:20:19.988 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:20:19.988 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:20:19.988 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:20:19.988 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:20:19.988 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2118338 00:20:19.988 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:20:19.988 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2118338 00:20:19.988 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:20:19.988 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2118338 00:20:19.988 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:20:19.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:20:19.988 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:20:19.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:20:19.988 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:20:19.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:20:19.988 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:20:19.988 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:20:19.988 element at address: 0x200003eff180 with size: 1.000488 MiB 00:20:19.988 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2118338 00:20:19.988 element at address: 0x200003affc00 with size: 1.000488 MiB 00:20:19.988 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2118338 00:20:19.988 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:20:19.988 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2118338 00:20:19.988 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:20:19.988 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2118338 00:20:19.988 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:20:19.988 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2118338 00:20:19.988 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:20:19.988 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:20:19.988 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:20:19.988 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:20:19.988 element at address: 0x20001947c540 with size: 0.250488 MiB 00:20:19.988 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:20:19.988 element at address: 0x200003adf880 with size: 0.125488 MiB 00:20:19.988 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2118338 00:20:19.988 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:20:19.988 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:20:19.988 element at address: 0x200027e69100 with size: 0.023743 MiB 00:20:19.988 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:20:19.988 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:20:19.988 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2118338 00:20:19.988 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:20:19.988 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:20:19.988 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:20:19.988 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2118338 00:20:19.988 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:20:19.988 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2118338 00:20:19.988 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:20:19.988 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:20:19.988 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:20:19.988 11:28:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2118338 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 2118338 ']' 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 2118338 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2118338 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2118338' 00:20:19.988 killing process with pid 2118338 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 2118338 00:20:19.988 11:28:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 2118338 00:20:20.249 00:20:20.249 real 0m0.835s 00:20:20.249 user 0m0.808s 00:20:20.249 sys 0m0.364s 00:20:20.249 11:28:49 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:20.249 11:28:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 ************************************ 00:20:20.249 END TEST dpdk_mem_utility 00:20:20.249 ************************************ 00:20:20.249 11:28:49 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:20:20.249 11:28:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:20.249 11:28:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:20.249 11:28:49 -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 ************************************ 00:20:20.249 START TEST event 00:20:20.249 ************************************ 00:20:20.249 11:28:49 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:20:20.249 * Looking for test storage... 
00:20:20.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:20:20.249 11:28:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:20:20.249 11:28:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:20:20.249 11:28:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:20:20.249 11:28:49 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:20:20.249 11:28:49 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:20.249 11:28:49 event -- common/autotest_common.sh@10 -- # set +x 00:20:20.249 ************************************ 00:20:20.249 START TEST event_perf 00:20:20.249 ************************************ 00:20:20.249 11:28:49 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:20:20.509 Running I/O for 1 seconds...[2024-06-10 11:28:49.225266] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:20.509 [2024-06-10 11:28:49.225361] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118517 ] 00:20:20.509 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.509 [2024-06-10 11:28:49.290114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.509 [2024-06-10 11:28:49.356509] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.509 [2024-06-10 11:28:49.356625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.509 [2024-06-10 11:28:49.356783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.509 Running I/O for 1 seconds...[2024-06-10 11:28:49.356783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.452 00:20:21.452 lcore 0: 180718 00:20:21.452 lcore 1: 180718 00:20:21.452 lcore 2: 180713 00:20:21.452 lcore 3: 180716 00:20:21.452 done. 00:20:21.452 00:20:21.452 real 0m1.209s 00:20:21.452 user 0m4.132s 00:20:21.452 sys 0m0.073s 00:20:21.452 11:28:50 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:21.452 11:28:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:20:21.452 ************************************ 00:20:21.452 END TEST event_perf 00:20:21.452 ************************************ 00:20:21.713 11:28:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:20:21.713 11:28:50 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:20:21.713 11:28:50 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:21.713 11:28:50 event -- common/autotest_common.sh@10 -- # set +x 00:20:21.713 ************************************ 00:20:21.713 START TEST event_reactor 00:20:21.713 ************************************ 00:20:21.713 11:28:50 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:20:21.713 [2024-06-10 11:28:50.504028] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:21.713 [2024-06-10 11:28:50.504127] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118869 ] 00:20:21.713 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.713 [2024-06-10 11:28:50.566781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.713 [2024-06-10 11:28:50.630678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.099 test_start 00:20:23.099 oneshot 00:20:23.099 tick 100 00:20:23.099 tick 100 00:20:23.099 tick 250 00:20:23.099 tick 100 00:20:23.099 tick 100 00:20:23.099 tick 100 00:20:23.099 tick 250 00:20:23.099 tick 500 00:20:23.099 tick 100 00:20:23.099 tick 100 00:20:23.099 tick 250 00:20:23.099 tick 100 00:20:23.099 tick 100 00:20:23.099 test_end 00:20:23.099 00:20:23.099 real 0m1.200s 00:20:23.099 user 0m1.123s 00:20:23.099 sys 0m0.073s 00:20:23.099 11:28:51 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:23.099 11:28:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:20:23.099 ************************************ 00:20:23.099 END TEST event_reactor 00:20:23.099 ************************************ 00:20:23.099 11:28:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:20:23.099 11:28:51 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:20:23.099 11:28:51 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:23.099 11:28:51 event -- common/autotest_common.sh@10 -- # set +x 00:20:23.099 ************************************ 00:20:23.099 START TEST event_reactor_perf 00:20:23.099 ************************************ 00:20:23.099 11:28:51 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:20:23.099 [2024-06-10 11:28:51.778538] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:23.099 [2024-06-10 11:28:51.778630] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119184 ] 00:20:23.099 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.099 [2024-06-10 11:28:51.842478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.099 [2024-06-10 11:28:51.909701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.040 test_start 00:20:24.040 test_end 00:20:24.040 Performance: 371676 events per second 00:20:24.040 00:20:24.040 real 0m1.203s 00:20:24.040 user 0m1.128s 00:20:24.040 sys 0m0.070s 00:20:24.040 11:28:52 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:24.040 11:28:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.040 ************************************ 00:20:24.040 END TEST event_reactor_perf 00:20:24.040 ************************************ 00:20:24.040 11:28:52 event -- event/event.sh@49 -- # uname -s 00:20:24.040 11:28:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:20:24.040 11:28:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:20:24.040 11:28:53 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:24.040 11:28:53 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:24.040 11:28:53 event -- common/autotest_common.sh@10 -- # set +x 00:20:24.302 ************************************ 00:20:24.302 START TEST event_scheduler 00:20:24.302 ************************************ 00:20:24.302 11:28:53 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:20:24.302 * Looking for test storage... 00:20:24.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:20:24.302 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:20:24.302 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2119416 00:20:24.302 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:20:24.302 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2119416 00:20:24.302 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:20:24.302 11:28:53 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 2119416 ']' 00:20:24.302 11:28:53 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.302 11:28:53 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:24.302 11:28:53 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:24.302 11:28:53 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:24.302 11:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:24.302 [2024-06-10 11:28:53.195042] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:24.302 [2024-06-10 11:28:53.195109] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119416 ] 00:20:24.302 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.302 [2024-06-10 11:28:53.251463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.579 [2024-06-10 11:28:53.317948] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.579 [2024-06-10 11:28:53.318072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.579 [2024-06-10 11:28:53.318228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.579 [2024-06-10 11:28:53.318229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:20:24.579 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:24.579 POWER: Env isn't set yet! 00:20:24.579 POWER: Attempting to initialise ACPI cpufreq power management... 00:20:24.579 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:24.579 POWER: Cannot set governor of lcore 0 to userspace 00:20:24.579 POWER: Attempting to initialise PSTAT power management... 
00:20:24.579 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:20:24.579 POWER: Initialized successfully for lcore 0 power management 00:20:24.579 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:20:24.579 POWER: Initialized successfully for lcore 1 power management 00:20:24.579 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:20:24.579 POWER: Initialized successfully for lcore 2 power management 00:20:24.579 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:20:24.579 POWER: Initialized successfully for lcore 3 power management 00:20:24.579 [2024-06-10 11:28:53.402027] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:20:24.579 [2024-06-10 11:28:53.402040] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:20:24.579 [2024-06-10 11:28:53.402046] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.579 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:24.579 [2024-06-10 11:28:53.459107] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.579 11:28:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:24.579 11:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:24.579 ************************************ 00:20:24.579 START TEST scheduler_create_thread 00:20:24.579 ************************************ 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:24.579 2 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:24.579 3 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:24.579 4 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.579 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:24.841 5 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:24.841 6 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:24.841 7 00:20:24.841 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.842 11:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:20:24.842 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.842 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:24.842 8 00:20:24.842 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.842 11:28:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:20:24.842 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.842 11:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:26.229 9 00:20:26.229 11:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.229 11:28:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:20:26.229 11:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:20:26.229 11:28:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:26.802 10 00:20:26.802 11:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.802 11:28:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:20:26.802 11:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.802 11:28:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:27.744 11:28:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.744 11:28:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:20:27.744 11:28:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:20:27.744 11:28:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.744 11:28:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:28.315 11:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.315 11:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:20:28.315 11:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.315 11:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:28.888 11:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.888 11:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:20:28.888 11:28:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:20:28.888 11:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.888 11:28:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:29.461 11:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.461 00:20:29.461 real 0m4.724s 00:20:29.461 user 0m0.025s 00:20:29.461 sys 0m0.006s 00:20:29.461 11:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:29.461 11:28:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:29.461 ************************************ 00:20:29.461 END TEST scheduler_create_thread 00:20:29.461 ************************************ 00:20:29.461 11:28:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:29.461 11:28:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2119416 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 2119416 ']' 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 2119416 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2119416 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2119416' 00:20:29.461 killing process with pid 2119416 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 2119416 00:20:29.461 11:28:58 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 2119416 00:20:29.461 [2024-06-10 11:28:58.322491] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:20:29.461 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:20:29.461 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:20:29.461 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:20:29.461 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:20:29.461 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:20:29.461 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:20:29.461 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:20:29.461 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:20:29.722 00:20:29.722 real 0m5.433s 00:20:29.722 user 0m12.278s 00:20:29.722 sys 0m0.352s 00:20:29.722 11:28:58 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:29.722 11:28:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:29.722 ************************************ 00:20:29.722 END TEST event_scheduler 00:20:29.722 ************************************ 00:20:29.722 11:28:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:20:29.722 11:28:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:20:29.722 11:28:58 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:29.722 11:28:58 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:29.722 11:28:58 event -- common/autotest_common.sh@10 -- # set +x 00:20:29.722 ************************************ 00:20:29.722 START TEST app_repeat 00:20:29.722 ************************************ 00:20:29.722 11:28:58 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2120637 00:20:29.722 11:28:58 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2120637' 00:20:29.722 Process app_repeat pid: 2120637 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:20:29.722 spdk_app_start Round 0 00:20:29.722 11:28:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2120637 /var/tmp/spdk-nbd.sock 00:20:29.722 11:28:58 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2120637 ']' 00:20:29.722 11:28:58 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:29.722 11:28:58 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:29.723 11:28:58 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:29.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:29.723 11:28:58 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:29.723 11:28:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:29.723 11:28:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:20:29.723 [2024-06-10 11:28:58.588696] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:29.723 [2024-06-10 11:28:58.588762] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120637 ] 00:20:29.723 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.723 [2024-06-10 11:28:58.651206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:29.984 [2024-06-10 11:28:58.721591] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.984 [2024-06-10 11:28:58.721597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.562 11:28:59 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:30.562 11:28:59 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:20:30.562 11:28:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:30.895 Malloc0 00:20:30.895 11:28:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:31.158 Malloc1 00:20:31.158 11:28:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:31.158 11:28:59 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:31.158 11:28:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:31.158 /dev/nbd0 00:20:31.158 11:29:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:31.158 11:29:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:31.158 1+0 records in 00:20:31.158 1+0 records out 00:20:31.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298614 s, 13.7 MB/s 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:20:31.158 11:29:00 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:20:31.158 11:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:31.158 11:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:31.158 11:29:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:31.419 /dev/nbd1 00:20:31.419 11:29:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:31.419 11:29:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:20:31.419 11:29:00 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:31.419 1+0 records in 00:20:31.419 1+0 records out 00:20:31.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236784 s, 17.3 MB/s 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:20:31.419 11:29:00 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:20:31.419 11:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:31.419 11:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:31.419 11:29:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:31.419 11:29:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.419 11:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:31.680 { 00:20:31.680 "nbd_device": "/dev/nbd0", 00:20:31.680 "bdev_name": "Malloc0" 00:20:31.680 }, 00:20:31.680 { 00:20:31.680 "nbd_device": "/dev/nbd1", 00:20:31.680 "bdev_name": "Malloc1" 00:20:31.680 } 00:20:31.680 ]' 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:31.680 { 00:20:31.680 "nbd_device": "/dev/nbd0", 00:20:31.680 "bdev_name": "Malloc0" 00:20:31.680 }, 00:20:31.680 { 00:20:31.680 "nbd_device": "/dev/nbd1", 00:20:31.680 "bdev_name": "Malloc1" 00:20:31.680 } 00:20:31.680 ]' 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:31.680 /dev/nbd1' 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:31.680 /dev/nbd1' 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:31.680 11:29:00 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:31.680 256+0 records in 00:20:31.680 256+0 records out 00:20:31.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117209 s, 89.5 MB/s 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:31.680 256+0 records in 00:20:31.680 256+0 records out 00:20:31.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015762 s, 66.5 MB/s 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:31.680 11:29:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:31.680 256+0 records in 00:20:31.680 256+0 records out 00:20:31.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018188 s, 57.7 MB/s 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.941 11:29:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.202 11:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:32.462 11:29:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:20:32.462 11:29:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:32.723 11:29:01 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:20:32.723 [2024-06-10 11:29:01.630872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:32.984 [2024-06-10 11:29:01.695037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.984 [2024-06-10 11:29:01.695042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.984 [2024-06-10 11:29:01.726422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:32.984 [2024-06-10 11:29:01.726455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:20:36.281 11:29:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:20:36.281 11:29:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:20:36.281 spdk_app_start Round 1 00:20:36.281 11:29:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2120637 /var/tmp/spdk-nbd.sock 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2120637 ']' 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:36.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:36.281 11:29:04 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:20:36.281 11:29:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:36.281 Malloc0 00:20:36.281 11:29:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:36.281 Malloc1 00:20:36.281 11:29:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:36.281 11:29:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:36.282 11:29:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
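The waitfornbd traces in each round follow the same attach-and-probe pattern. A minimal sketch, assuming the RPC socket (/var/tmp/spdk-nbd.sock) and rpc.py path shown in the traces; the scratch file path is shortened here, the retry cap of 20 matches the (( i <= 20 )) counters, and the sleep back-off is an assumption not visible in the trace because the first probe succeeded.

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create 64 4096          # 64 MiB bdev with 4 KiB blocks; prints the new name (Malloc0)
$rpc nbd_start_disk Malloc0 /dev/nbd0    # export the bdev as an NBD block device

for ((i = 1; i <= 20; i++)); do
    # the device is usable once the kernel lists it in /proc/partitions
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1                            # assumed back-off between retries
done

# read one 4 KiB block with O_DIRECT to prove the device answers I/O
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" != 0 ]    # non-empty read means the device is live
rm -f /tmp/nbdtest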
00:20:36.282 11:29:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:20:36.282 11:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:36.282 11:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:36.282 11:29:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:36.542 /dev/nbd0 00:20:36.542 11:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:36.542 11:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:36.542 1+0 records in 00:20:36.542 1+0 records out 00:20:36.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247412 s, 16.6 MB/s 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:20:36.542 11:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:20:36.542 11:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:36.542 11:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:36.542 11:29:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:36.803 /dev/nbd1 00:20:36.803 11:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:36.803 11:29:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:36.803 1+0 records in 00:20:36.803 1+0 records out 00:20:36.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264443 s, 15.5 MB/s 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:20:36.803 11:29:05 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:20:36.803 11:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:36.803 11:29:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:36.803 11:29:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:36.803 11:29:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:36.803 11:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:37.064 { 00:20:37.064 "nbd_device": "/dev/nbd0", 00:20:37.064 "bdev_name": "Malloc0" 00:20:37.064 }, 00:20:37.064 { 00:20:37.064 "nbd_device": "/dev/nbd1", 00:20:37.064 "bdev_name": "Malloc1" 00:20:37.064 } 00:20:37.064 ]' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:37.064 { 00:20:37.064 "nbd_device": "/dev/nbd0", 00:20:37.064 "bdev_name": "Malloc0" 00:20:37.064 }, 00:20:37.064 { 00:20:37.064 "nbd_device": "/dev/nbd1", 00:20:37.064 "bdev_name": "Malloc1" 00:20:37.064 } 00:20:37.064 ]' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:37.064 /dev/nbd1' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:37.064 /dev/nbd1' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:37.064 256+0 records in 00:20:37.064 256+0 records out 00:20:37.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012461 s, 84.1 MB/s 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:37.064 256+0 records in 00:20:37.064 256+0 records out 00:20:37.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231921 s, 45.2 MB/s 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:37.064 256+0 records in 00:20:37.064 256+0 records out 00:20:37.064 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199985 s, 52.4 MB/s 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:20:37.064 11:29:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.065 11:29:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:37.325 11:29:06 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.325 11:29:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:37.585 11:29:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:37.846 11:29:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:20:37.846 11:29:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:37.846 11:29:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:20:38.106 [2024-06-10 11:29:06.940593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:38.106 [2024-06-10 11:29:07.004588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.106 [2024-06-10 11:29:07.004595] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.106 [2024-06-10 11:29:07.036826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
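The nbd_dd_data_verify traces in the rounds above reduce to this write-then-verify sequence; the pattern file lives under test/event in the trace and is shortened to $tmp here, and the nbd list matches the two devices under test.

tmp=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# write: 256 x 4 KiB (1 MiB) of random data, pushed to every exported NBD with O_DIRECT
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
done

# verify: byte-compare the first 1 MiB of each device against the pattern file
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$nbd"           # any mismatch exits non-zero and fails the test
done
rm "$tmp"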
00:20:38.106 [2024-06-10 11:29:07.036862] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:20:41.406 11:29:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:20:41.406 11:29:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:20:41.406 spdk_app_start Round 2 00:20:41.406 11:29:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2120637 /var/tmp/spdk-nbd.sock 00:20:41.406 11:29:09 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2120637 ']' 00:20:41.406 11:29:09 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:41.406 11:29:09 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:41.406 11:29:09 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:41.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:41.406 11:29:09 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:41.406 11:29:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:41.406 11:29:10 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:41.406 11:29:10 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:20:41.406 11:29:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:41.406 Malloc0 00:20:41.406 11:29:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:41.666 Malloc1 00:20:41.666 11:29:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:41.666 11:29:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:41.927 /dev/nbd0 00:20:41.927 
11:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:41.927 11:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:41.927 1+0 records in 00:20:41.927 1+0 records out 00:20:41.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027068 s, 15.1 MB/s 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:20:41.927 11:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:20:41.927 11:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.927 11:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:41.927 11:29:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:41.927 /dev/nbd1 00:20:42.187 11:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:42.187 11:29:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:42.187 1+0 records in 00:20:42.187 1+0 records out 00:20:42.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031262 s, 13.1 MB/s 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:20:42.187 11:29:10 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:20:42.187 11:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:42.187 11:29:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:42.187 11:29:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:42.187 11:29:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:42.187 11:29:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:42.187 11:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:42.187 { 00:20:42.187 "nbd_device": "/dev/nbd0", 00:20:42.187 "bdev_name": "Malloc0" 00:20:42.187 }, 00:20:42.187 { 00:20:42.187 "nbd_device": "/dev/nbd1", 00:20:42.187 "bdev_name": "Malloc1" 00:20:42.187 } 00:20:42.187 ]' 00:20:42.187 11:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:42.187 { 00:20:42.187 "nbd_device": "/dev/nbd0", 00:20:42.187 "bdev_name": "Malloc0" 00:20:42.187 }, 00:20:42.187 { 00:20:42.187 "nbd_device": "/dev/nbd1", 00:20:42.187 "bdev_name": "Malloc1" 00:20:42.187 } 00:20:42.187 ]' 00:20:42.187 11:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:42.447 /dev/nbd1' 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:42.447 /dev/nbd1' 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:42.447 256+0 records in 00:20:42.447 256+0 records out 00:20:42.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115797 s, 90.6 MB/s 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:42.447 11:29:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:42.447 256+0 records in 00:20:42.448 256+0 records out 00:20:42.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159542 s, 65.7 MB/s 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:42.448 256+0 records in 00:20:42.448 256+0 records out 00:20:42.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0433233 s, 24.2 MB/s 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:42.448 11:29:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
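The nbd_get_count checks that bracket each round work off the JSON returned over the RPC socket, as in the traces just above. A short sketch, with the variable names illustrative; the '|| true' paraphrases the trace's handling of grep -c exiting non-zero when nothing matches.

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

json=$($rpc nbd_get_disks)               # e.g. [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
names=$(echo "$json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)

# expected: 2 while Malloc0/Malloc1 are exported, 0 once both devices have been stopped
[ "$count" -eq 2 ]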
00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:42.708 11:29:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:42.967 11:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:43.227 11:29:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:20:43.227 11:29:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:43.227 11:29:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:20:43.487 [2024-06-10 11:29:12.312014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:43.487 [2024-06-10 11:29:12.375712] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.487 [2024-06-10 11:29:12.375719] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.487 [2024-06-10 11:29:12.407113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:43.487 [2024-06-10 11:29:12.407146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
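The teardown side of each round, sketched from the nbd_stop_disk and waitfornbd_exit traces: stop each NBD over RPC, wait for the kernel to drop it from /proc/partitions, then ask the app to exit so the next round can restart it. The 'if ! grep' form paraphrases the traced grep/break pair, and the sleep back-off is assumed rather than shown.

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for nbd in /dev/nbd0 /dev/nbd1; do
    $rpc nbd_stop_disk "$nbd"
    name=$(basename "$nbd")
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$name" /proc/partitions; then
            break                        # device no longer exported
        fi
        sleep 0.1                        # assumed back-off between retries
    done
done

$rpc spdk_kill_instance SIGTERM          # ask the app_repeat instance to shut down cleanly
sleep 3                                  # matches event.sh's pause before the next round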
00:20:46.789 11:29:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2120637 /var/tmp/spdk-nbd.sock 00:20:46.789 11:29:15 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 2120637 ']' 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:46.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:20:46.790 11:29:15 event.app_repeat -- event/event.sh@39 -- # killprocess 2120637 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 2120637 ']' 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 2120637 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2120637 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2120637' 00:20:46.790 killing process with pid 2120637 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@968 -- # kill 2120637 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@973 -- # wait 2120637 00:20:46.790 spdk_app_start is called in Round 0. 00:20:46.790 Shutdown signal received, stop current app iteration 00:20:46.790 Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 reinitialization... 00:20:46.790 spdk_app_start is called in Round 1. 00:20:46.790 Shutdown signal received, stop current app iteration 00:20:46.790 Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 reinitialization... 00:20:46.790 spdk_app_start is called in Round 2. 00:20:46.790 Shutdown signal received, stop current app iteration 00:20:46.790 Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 reinitialization... 00:20:46.790 spdk_app_start is called in Round 3. 
00:20:46.790 Shutdown signal received, stop current app iteration 00:20:46.790 11:29:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:20:46.790 11:29:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:20:46.790 00:20:46.790 real 0m17.019s 00:20:46.790 user 0m37.635s 00:20:46.790 sys 0m2.454s 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:46.790 11:29:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:46.790 ************************************ 00:20:46.790 END TEST app_repeat 00:20:46.790 ************************************ 00:20:46.790 11:29:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:20:46.790 11:29:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:20:46.790 11:29:15 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:46.790 11:29:15 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:46.790 11:29:15 event -- common/autotest_common.sh@10 -- # set +x 00:20:46.790 ************************************ 00:20:46.790 START TEST cpu_locks 00:20:46.790 ************************************ 00:20:46.790 11:29:15 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:20:46.790 * Looking for test storage... 00:20:46.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:20:46.790 11:29:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:20:46.790 11:29:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:20:46.790 11:29:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:20:46.790 11:29:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:20:46.790 11:29:15 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:46.790 11:29:15 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:46.790 11:29:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:47.050 ************************************ 00:20:47.050 START TEST default_locks 00:20:47.050 ************************************ 00:20:47.050 11:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:20:47.050 11:29:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2124252 00:20:47.050 11:29:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2124252 00:20:47.051 11:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2124252 ']' 00:20:47.051 11:29:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:20:47.051 11:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.051 11:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:47.051 11:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:47.051 11:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:47.051 11:29:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:47.051 [2024-06-10 11:29:15.839618] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:47.051 [2024-06-10 11:29:15.839675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124252 ] 00:20:47.051 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.051 [2024-06-10 11:29:15.899408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.051 [2024-06-10 11:29:15.967206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.311 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:47.311 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:20:47.311 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2124252 00:20:47.311 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2124252 00:20:47.311 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:47.882 lslocks: write error 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 2124252 ']' 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2124252' 00:20:47.882 killing process with pid 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 2124252 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 2124252 ']' 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:47.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2124252) - No such process 00:20:47.882 ERROR: process (pid: 2124252) is no longer running 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:47.882 00:20:47.882 real 0m1.054s 00:20:47.882 user 0m1.079s 00:20:47.882 sys 0m0.498s 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:47.882 11:29:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:47.882 ************************************ 00:20:47.882 END TEST default_locks 00:20:47.882 ************************************ 00:20:48.143 11:29:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:20:48.143 11:29:16 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:48.143 11:29:16 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:48.143 11:29:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:48.143 ************************************ 00:20:48.143 START TEST default_locks_via_rpc 00:20:48.143 ************************************ 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2124473 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2124473 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:20:48.143 11:29:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2124473 ']' 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:48.143 11:29:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.143 [2024-06-10 11:29:16.954544] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:48.143 [2024-06-10 11:29:16.954594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124473 ] 00:20:48.143 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.143 [2024-06-10 11:29:17.013581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.143 [2024-06-10 11:29:17.079388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.403 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:48.403 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2124473 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2124473 00:20:48.404 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2124473 00:20:48.974 11:29:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 2124473 ']' 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 2124473 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2124473 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2124473' 00:20:48.974 killing process with pid 2124473 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 2124473 00:20:48.974 11:29:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 2124473 00:20:49.234 00:20:49.234 real 0m1.112s 00:20:49.234 user 0m1.143s 00:20:49.234 sys 0m0.507s 00:20:49.234 11:29:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:49.234 11:29:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.234 ************************************ 00:20:49.234 END TEST default_locks_via_rpc 00:20:49.234 ************************************ 00:20:49.234 11:29:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:20:49.234 11:29:18 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:49.234 11:29:18 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:49.234 11:29:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:49.234 ************************************ 00:20:49.234 START TEST non_locking_app_on_locked_coremask 00:20:49.234 ************************************ 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2124656 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2124656 /var/tmp/spdk.sock 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2124656 ']' 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:49.234 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:49.234 [2024-06-10 11:29:18.137983] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:49.234 [2024-06-10 11:29:18.138032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124656 ] 00:20:49.234 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.234 [2024-06-10 11:29:18.197349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.495 [2024-06-10 11:29:18.262889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2124760 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2124760 /var/tmp/spdk2.sock 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2124760 ']' 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:49.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:49.495 11:29:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:49.755 [2024-06-10 11:29:18.504297] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:49.755 [2024-06-10 11:29:18.504365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124760 ] 00:20:49.755 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.755 [2024-06-10 11:29:18.591602] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
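Illustration (not from the captured output): the cpu_locks tests traced in this run all exercise the same mechanism — an spdk_tgt started with a core mask takes one file lock per claimed core (/var/tmp/spdk_cpu_lock_NNN, visible to lslocks), a second instance on an overlapping mask can only start if it passes --disable-cpumask-locks, and the claim can be retried at runtime over RPC. A rough sketch of that pattern under those assumptions, using this workspace's paths:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # First target claims core 0 (mask 0x1) and holds /var/tmp/spdk_cpu_lock_000.
    "$spdk/build/bin/spdk_tgt" -m 0x1 &
    pid=$!
    # (the real tests wait for the RPC socket before probing)
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # the per-core lock shows up here
    # A second target on the same core only starts if it skips the claim...
    "$spdk/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # ...and can try to take the locks later through rpc.py.
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks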
00:20:49.755 [2024-06-10 11:29:18.591631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.755 [2024-06-10 11:29:18.721082] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.696 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:50.696 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:20:50.696 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2124656 00:20:50.696 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2124656 00:20:50.696 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:50.956 lslocks: write error 00:20:50.956 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2124656 00:20:50.956 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2124656 ']' 00:20:50.956 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2124656 00:20:50.956 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:20:50.956 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:50.956 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2124656 00:20:51.216 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:51.216 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:51.216 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2124656' 00:20:51.216 killing process with pid 2124656 00:20:51.216 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2124656 00:20:51.216 11:29:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2124656 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2124760 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2124760 ']' 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2124760 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2124760 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2124760' 00:20:51.476 
killing process with pid 2124760 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2124760 00:20:51.476 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2124760 00:20:51.736 00:20:51.736 real 0m2.546s 00:20:51.736 user 0m2.847s 00:20:51.736 sys 0m0.859s 00:20:51.736 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:51.736 11:29:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:51.736 ************************************ 00:20:51.736 END TEST non_locking_app_on_locked_coremask 00:20:51.736 ************************************ 00:20:51.736 11:29:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:20:51.736 11:29:20 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:51.736 11:29:20 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:51.736 11:29:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:51.736 ************************************ 00:20:51.736 START TEST locking_app_on_unlocked_coremask 00:20:51.736 ************************************ 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2125357 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2125357 /var/tmp/spdk.sock 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2125357 ']' 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:51.736 11:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:51.996 [2024-06-10 11:29:20.754643] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:51.996 [2024-06-10 11:29:20.754695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125357 ] 00:20:51.996 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.997 [2024-06-10 11:29:20.813622] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:20:51.997 [2024-06-10 11:29:20.813657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.997 [2024-06-10 11:29:20.876856] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2125360 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2125360 /var/tmp/spdk2.sock 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2125360 ']' 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:52.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:52.258 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:52.258 [2024-06-10 11:29:21.087977] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:52.258 [2024-06-10 11:29:21.088023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125360 ] 00:20:52.258 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.258 [2024-06-10 11:29:21.177918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.521 [2024-06-10 11:29:21.307529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.094 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:53.094 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:20:53.094 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2125360 00:20:53.094 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2125360 00:20:53.094 11:29:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:53.666 lslocks: write error 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2125357 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2125357 ']' 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2125357 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2125357 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2125357' 00:20:53.666 killing process with pid 2125357 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 2125357 00:20:53.666 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2125357 00:20:54.239 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2125360 00:20:54.239 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2125360 ']' 00:20:54.239 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 2125360 00:20:54.239 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:20:54.239 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:54.239 11:29:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2125360 00:20:54.239 11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:20:54.239 11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:54.239 11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2125360' 00:20:54.239 killing process with pid 2125360 00:20:54.239 11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 2125360 00:20:54.239 11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 2125360 00:20:54.500 00:20:54.500 real 0m2.555s 00:20:54.500 user 0m2.832s 00:20:54.500 sys 0m0.860s 00:20:54.500 11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:54.501 ************************************ 00:20:54.501 END TEST locking_app_on_unlocked_coremask 00:20:54.501 ************************************ 00:20:54.501 11:29:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:20:54.501 11:29:23 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:54.501 11:29:23 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:54.501 11:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:54.501 ************************************ 00:20:54.501 START TEST locking_app_on_locked_coremask 00:20:54.501 ************************************ 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2125754 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2125754 /var/tmp/spdk.sock 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2125754 ']' 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:54.501 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:20:54.501 [2024-06-10 11:29:23.380057] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:54.501 [2024-06-10 11:29:23.380106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125754 ] 00:20:54.501 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.501 [2024-06-10 11:29:23.440723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.762 [2024-06-10 11:29:23.505441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2125934 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2125934 /var/tmp/spdk2.sock 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2125934 /var/tmp/spdk2.sock 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2125934 /var/tmp/spdk2.sock 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 2125934 ']' 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:54.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:54.762 11:29:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:55.023 [2024-06-10 11:29:23.737154] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:55.023 [2024-06-10 11:29:23.737205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125934 ] 00:20:55.023 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.023 [2024-06-10 11:29:23.822712] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2125754 has claimed it. 00:20:55.023 [2024-06-10 11:29:23.822751] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:20:55.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2125934) - No such process 00:20:55.595 ERROR: process (pid: 2125934) is no longer running 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2125754 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:55.595 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2125754 00:20:55.856 lslocks: write error 00:20:55.856 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2125754 00:20:55.856 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 2125754 ']' 00:20:55.856 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 2125754 00:20:55.856 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:20:55.856 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:55.856 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2125754 00:20:56.117 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:56.117 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:56.117 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2125754' 00:20:56.117 killing process with pid 2125754 00:20:56.117 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 2125754 00:20:56.117 11:29:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 2125754 00:20:56.117 00:20:56.117 real 0m1.740s 00:20:56.117 user 0m1.949s 00:20:56.117 sys 0m0.567s 00:20:56.117 11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:20:56.117 11:29:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:56.117 ************************************ 00:20:56.117 END TEST locking_app_on_locked_coremask 00:20:56.117 ************************************ 00:20:56.391 11:29:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:20:56.391 11:29:25 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:56.391 11:29:25 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:56.391 11:29:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:56.391 ************************************ 00:20:56.391 START TEST locking_overlapped_coremask 00:20:56.391 ************************************ 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2126182 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2126182 /var/tmp/spdk.sock 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2126182 ']' 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:56.391 11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:20:56.391 [2024-06-10 11:29:25.191005] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:56.391 [2024-06-10 11:29:25.191057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126182 ] 00:20:56.391 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.391 [2024-06-10 11:29:25.251385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:56.391 [2024-06-10 11:29:25.321530] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.391 [2024-06-10 11:29:25.321652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.391 [2024-06-10 11:29:25.321656] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2126365 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2126365 /var/tmp/spdk2.sock 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2126365 /var/tmp/spdk2.sock 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2126365 /var/tmp/spdk2.sock 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 2126365 ']' 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:56.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:56.707 11:29:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:56.707 [2024-06-10 11:29:25.548387] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:20:56.707 [2024-06-10 11:29:25.548435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126365 ] 00:20:56.707 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.707 [2024-06-10 11:29:25.621370] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2126182 has claimed it. 00:20:56.707 [2024-06-10 11:29:25.621402] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:20:57.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (2126365) - No such process 00:20:57.293 ERROR: process (pid: 2126365) is no longer running 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2126182 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 2126182 ']' 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 2126182 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:20:57.293 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:57.294 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2126182 00:20:57.294 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:57.294 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:57.294 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2126182' 00:20:57.294 killing process with pid 2126182 00:20:57.294 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
2126182 00:20:57.294 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 2126182 00:20:57.554 00:20:57.554 real 0m1.338s 00:20:57.554 user 0m3.708s 00:20:57.554 sys 0m0.331s 00:20:57.554 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:57.554 11:29:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:57.554 ************************************ 00:20:57.554 END TEST locking_overlapped_coremask 00:20:57.554 ************************************ 00:20:57.554 11:29:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:20:57.554 11:29:26 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:20:57.554 11:29:26 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:57.554 11:29:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:57.815 ************************************ 00:20:57.815 START TEST locking_overlapped_coremask_via_rpc 00:20:57.815 ************************************ 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2126477 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2126477 /var/tmp/spdk.sock 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2126477 ']' 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:57.815 11:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.815 [2024-06-10 11:29:26.609726] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:57.815 [2024-06-10 11:29:26.609771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126477 ] 00:20:57.815 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.815 [2024-06-10 11:29:26.671186] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
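Illustration (not from the captured output): the overlapped-coremask test that just finished and the *_via_rpc variant starting up here both pick masks that collide on core 2 — 0x7 covers cores 0-2 and 0x1c covers cores 2-4. Without --disable-cpumask-locks the second target aborts with "Cannot create lock on core 2"; with the flag it starts, and it is the later framework_enable_cpumask_locks call that fails with the JSON-RPC -32603 "Failed to claim CPU core: 2" error seen further down. A condensed sketch of the scenario, assuming the same workspace paths:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/bin/spdk_tgt" -m 0x7 &                 # claims cores 0, 1 and 2
    # Second target overlaps on core 2; it can only start without the claim.
    "$spdk/build/bin/spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # (the real tests wait for each RPC socket before continuing)
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> expected to fail: JSON-RPC error -32603, "Failed to claim CPU core: 2"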
00:20:57.815 [2024-06-10 11:29:26.671218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:57.815 [2024-06-10 11:29:26.736776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.815 [2024-06-10 11:29:26.737118] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.815 [2024-06-10 11:29:26.737122] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2126813 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2126813 /var/tmp/spdk2.sock 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2126813 ']' 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:58.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:58.758 11:29:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:58.758 [2024-06-10 11:29:27.521921] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:20:58.758 [2024-06-10 11:29:27.521976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126813 ] 00:20:58.758 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.758 [2024-06-10 11:29:27.594196] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:20:58.758 [2024-06-10 11:29:27.594223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:58.758 [2024-06-10 11:29:27.704467] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.758 [2024-06-10 11:29:27.704628] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.758 [2024-06-10 11:29:27.704631] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 [2024-06-10 11:29:28.400733] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2126477 has claimed it. 
00:20:59.700 request: 00:20:59.700 { 00:20:59.700 "method": "framework_enable_cpumask_locks", 00:20:59.700 "req_id": 1 00:20:59.700 } 00:20:59.700 Got JSON-RPC error response 00:20:59.700 response: 00:20:59.700 { 00:20:59.700 "code": -32603, 00:20:59.700 "message": "Failed to claim CPU core: 2" 00:20:59.700 } 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2126477 /var/tmp/spdk.sock 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2126477 ']' 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2126813 /var/tmp/spdk2.sock 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 2126813 ']' 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:59.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:59.700 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.961 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:59.962 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:20:59.962 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:20:59.962 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:59.962 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:59.962 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:59.962 00:20:59.962 real 0m2.292s 00:20:59.962 user 0m1.026s 00:20:59.962 sys 0m0.188s 00:20:59.962 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:59.962 11:29:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.962 ************************************ 00:20:59.962 END TEST locking_overlapped_coremask_via_rpc 00:20:59.962 ************************************ 00:20:59.962 11:29:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:20:59.962 11:29:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2126477 ]] 00:20:59.962 11:29:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2126477 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2126477 ']' 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2126477 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2126477 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2126477' 00:20:59.962 killing process with pid 2126477 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2126477 00:20:59.962 11:29:28 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2126477 00:21:00.222 11:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2126813 ]] 00:21:00.222 11:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2126813 00:21:00.222 11:29:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2126813 ']' 00:21:00.222 11:29:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2126813 00:21:00.222 11:29:29 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:21:00.222 11:29:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:21:00.222 11:29:29 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2126813 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2126813' 00:21:00.483 killing process with pid 2126813 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 2126813 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 2126813 00:21:00.483 11:29:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:21:00.483 11:29:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:21:00.483 11:29:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2126477 ]] 00:21:00.483 11:29:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2126477 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2126477 ']' 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2126477 00:21:00.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2126477) - No such process 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2126477 is not found' 00:21:00.483 Process with pid 2126477 is not found 00:21:00.483 11:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2126813 ]] 00:21:00.483 11:29:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2126813 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 2126813 ']' 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 2126813 00:21:00.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2126813) - No such process 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 2126813 is not found' 00:21:00.483 Process with pid 2126813 is not found 00:21:00.483 11:29:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:21:00.483 00:21:00.483 real 0m13.756s 00:21:00.483 user 0m25.376s 00:21:00.483 sys 0m4.661s 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:00.483 11:29:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:00.483 ************************************ 00:21:00.483 END TEST cpu_locks 00:21:00.483 ************************************ 00:21:00.483 00:21:00.483 real 0m40.368s 00:21:00.483 user 1m21.877s 00:21:00.483 sys 0m8.054s 00:21:00.483 11:29:29 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:00.483 11:29:29 event -- common/autotest_common.sh@10 -- # set +x 00:21:00.483 ************************************ 00:21:00.483 END TEST event 00:21:00.483 ************************************ 00:21:00.745 11:29:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:21:00.745 11:29:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:00.745 11:29:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:00.745 11:29:29 -- common/autotest_common.sh@10 -- # set +x 00:21:00.745 ************************************ 00:21:00.745 START TEST thread 00:21:00.745 ************************************ 00:21:00.745 11:29:29 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:21:00.745 * Looking for test storage... 00:21:00.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:21:00.745 11:29:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:21:00.745 11:29:29 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:21:00.745 11:29:29 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:00.745 11:29:29 thread -- common/autotest_common.sh@10 -- # set +x 00:21:00.745 ************************************ 00:21:00.745 START TEST thread_poller_perf 00:21:00.745 ************************************ 00:21:00.745 11:29:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:21:00.745 [2024-06-10 11:29:29.672392] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:00.745 [2024-06-10 11:29:29.672491] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127252 ] 00:21:00.745 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.006 [2024-06-10 11:29:29.741085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.006 [2024-06-10 11:29:29.815768] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.006 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:21:01.947 ====================================== 00:21:01.947 busy:2410417854 (cyc) 00:21:01.947 total_run_count: 287000 00:21:01.947 tsc_hz: 2400000000 (cyc) 00:21:01.947 ====================================== 00:21:01.947 poller_cost: 8398 (cyc), 3499 (nsec) 00:21:01.947 00:21:01.947 real 0m1.229s 00:21:01.947 user 0m1.146s 00:21:01.947 sys 0m0.077s 00:21:01.947 11:29:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:01.947 11:29:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:21:01.947 ************************************ 00:21:01.947 END TEST thread_poller_perf 00:21:01.947 ************************************ 00:21:01.947 11:29:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:21:01.947 11:29:30 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:21:01.947 11:29:30 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:01.947 11:29:30 thread -- common/autotest_common.sh@10 -- # set +x 00:21:02.208 ************************************ 00:21:02.208 START TEST thread_poller_perf 00:21:02.208 ************************************ 00:21:02.208 11:29:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:21:02.208 [2024-06-10 11:29:30.974826] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:02.208 [2024-06-10 11:29:30.974924] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127605 ] 00:21:02.208 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.208 [2024-06-10 11:29:31.037258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.208 [2024-06-10 11:29:31.100838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.208 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:21:03.592 ====================================== 00:21:03.592 busy:2401821468 (cyc) 00:21:03.592 total_run_count: 3814000 00:21:03.592 tsc_hz: 2400000000 (cyc) 00:21:03.592 ====================================== 00:21:03.592 poller_cost: 629 (cyc), 262 (nsec) 00:21:03.592 00:21:03.592 real 0m1.203s 00:21:03.592 user 0m1.128s 00:21:03.592 sys 0m0.071s 00:21:03.592 11:29:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:03.592 11:29:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:21:03.592 ************************************ 00:21:03.592 END TEST thread_poller_perf 00:21:03.592 ************************************ 00:21:03.592 11:29:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:21:03.592 00:21:03.592 real 0m2.680s 00:21:03.592 user 0m2.364s 00:21:03.592 sys 0m0.322s 00:21:03.592 11:29:32 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:03.592 11:29:32 thread -- common/autotest_common.sh@10 -- # set +x 00:21:03.592 ************************************ 00:21:03.592 END TEST thread 00:21:03.592 ************************************ 00:21:03.592 11:29:32 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:21:03.592 11:29:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:03.592 11:29:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:03.592 11:29:32 -- common/autotest_common.sh@10 -- # set +x 00:21:03.592 ************************************ 00:21:03.592 START TEST accel 00:21:03.592 ************************************ 00:21:03.592 11:29:32 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:21:03.592 * Looking for test storage... 00:21:03.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:21:03.592 11:29:32 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:21:03.592 11:29:32 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:21:03.592 11:29:32 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:03.592 11:29:32 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2127983 00:21:03.592 11:29:32 accel -- accel/accel.sh@63 -- # waitforlisten 2127983 00:21:03.592 11:29:32 accel -- common/autotest_common.sh@830 -- # '[' -z 2127983 ']' 00:21:03.592 11:29:32 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.592 11:29:32 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:03.592 11:29:32 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:03.592 11:29:32 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:21:03.592 11:29:32 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:03.592 11:29:32 accel -- common/autotest_common.sh@10 -- # set +x 00:21:03.592 11:29:32 accel -- accel/accel.sh@61 -- # build_accel_config 00:21:03.592 11:29:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:03.592 11:29:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:03.592 11:29:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:03.592 11:29:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:03.592 11:29:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:03.592 11:29:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:21:03.592 11:29:32 accel -- accel/accel.sh@41 -- # jq -r . 00:21:03.592 [2024-06-10 11:29:32.435182] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:03.592 [2024-06-10 11:29:32.435251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127983 ] 00:21:03.592 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.592 [2024-06-10 11:29:32.500625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.854 [2024-06-10 11:29:32.574623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.426 11:29:33 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:04.426 11:29:33 accel -- common/autotest_common.sh@863 -- # return 0 00:21:04.426 11:29:33 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:21:04.426 11:29:33 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:21:04.426 11:29:33 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:21:04.426 11:29:33 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:21:04.427 11:29:33 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:21:04.427 11:29:33 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:21:04.427 11:29:33 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@10 -- # set +x 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 
11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # IFS== 00:21:04.427 11:29:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:21:04.427 11:29:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:21:04.427 11:29:33 accel -- accel/accel.sh@75 -- # killprocess 2127983 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@949 -- # '[' -z 2127983 ']' 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@953 -- # kill -0 2127983 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@954 -- # uname 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:04.427 11:29:33 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2127983 00:21:04.687 11:29:33 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:04.688 11:29:33 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:04.688 11:29:33 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2127983' 00:21:04.688 killing process with pid 2127983 00:21:04.688 11:29:33 accel -- common/autotest_common.sh@968 -- # kill 2127983 00:21:04.688 11:29:33 accel -- common/autotest_common.sh@973 -- # wait 2127983 00:21:04.688 11:29:33 accel -- accel/accel.sh@76 -- # trap - ERR 00:21:04.688 11:29:33 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:21:04.688 11:29:33 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:04.688 11:29:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:04.688 11:29:33 accel -- common/autotest_common.sh@10 -- # set +x 00:21:04.688 11:29:33 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:21:04.688 11:29:33 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:21:04.949 11:29:33 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:04.949 11:29:33 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:21:04.949 11:29:33 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:21:04.949 11:29:33 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:21:04.949 11:29:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:04.949 11:29:33 accel -- common/autotest_common.sh@10 -- # set +x 00:21:04.949 ************************************ 00:21:04.949 START TEST accel_missing_filename 00:21:04.949 ************************************ 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:04.949 11:29:33 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:21:04.949 11:29:33 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:21:04.949 [2024-06-10 11:29:33.788985] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:04.949 [2024-06-10 11:29:33.789088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128204 ] 00:21:04.949 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.949 [2024-06-10 11:29:33.858154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.211 [2024-06-10 11:29:33.933013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.211 [2024-06-10 11:29:33.965388] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:05.211 [2024-06-10 11:29:34.002435] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:21:05.211 A filename is required. 
00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.211 00:21:05.211 real 0m0.299s 00:21:05.211 user 0m0.235s 00:21:05.211 sys 0m0.107s 00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:05.211 11:29:34 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:21:05.211 ************************************ 00:21:05.211 END TEST accel_missing_filename 00:21:05.211 ************************************ 00:21:05.211 11:29:34 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:05.211 11:29:34 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:21:05.211 11:29:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:05.211 11:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:21:05.211 ************************************ 00:21:05.211 START TEST accel_compress_verify 00:21:05.211 ************************************ 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.211 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:05.211 
11:29:34 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:21:05.211 11:29:34 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:21:05.211 [2024-06-10 11:29:34.160035] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:05.211 [2024-06-10 11:29:34.160101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128395 ] 00:21:05.472 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.472 [2024-06-10 11:29:34.220955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.472 [2024-06-10 11:29:34.284834] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.472 [2024-06-10 11:29:34.316598] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:05.472 [2024-06-10 11:29:34.353443] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:21:05.472 00:21:05.472 Compression does not support the verify option, aborting. 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.473 00:21:05.473 real 0m0.277s 00:21:05.473 user 0m0.220s 00:21:05.473 sys 0m0.099s 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:05.473 11:29:34 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:21:05.473 ************************************ 00:21:05.473 END TEST accel_compress_verify 00:21:05.473 ************************************ 00:21:05.473 11:29:34 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:21:05.473 11:29:34 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:21:05.473 11:29:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:05.473 11:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:21:05.734 ************************************ 00:21:05.734 START TEST accel_wrong_workload 00:21:05.734 ************************************ 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:21:05.734 11:29:34 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:21:05.734 Unsupported workload type: foobar 00:21:05.734 [2024-06-10 11:29:34.509500] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:21:05.734 accel_perf options: 00:21:05.734 [-h help message] 00:21:05.734 [-q queue depth per core] 00:21:05.734 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:21:05.734 [-T number of threads per core 00:21:05.734 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:21:05.734 [-t time in seconds] 00:21:05.734 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:21:05.734 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:21:05.734 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:21:05.734 [-l for compress/decompress workloads, name of uncompressed input file 00:21:05.734 [-S for crc32c workload, use this seed value (default 0) 00:21:05.734 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:21:05.734 [-f for fill workload, use this BYTE value (default 255) 00:21:05.734 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:21:05.734 [-y verify result if this switch is on] 00:21:05.734 [-a tasks to allocate per core (default: same value as -q)] 00:21:05.734 Can be used to spread operations across a wider range of memory. 
00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.734 00:21:05.734 real 0m0.036s 00:21:05.734 user 0m0.019s 00:21:05.734 sys 0m0.017s 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:05.734 11:29:34 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:21:05.734 ************************************ 00:21:05.734 END TEST accel_wrong_workload 00:21:05.734 ************************************ 00:21:05.734 Error: writing output failed: Broken pipe 00:21:05.734 11:29:34 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:21:05.734 11:29:34 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:21:05.734 11:29:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:05.734 11:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:21:05.734 ************************************ 00:21:05.734 START TEST accel_negative_buffers 00:21:05.734 ************************************ 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:05.734 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:21:05.734 11:29:34 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:21:05.734 -x option must be non-negative. 
00:21:05.734 [2024-06-10 11:29:34.619783] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:21:05.734 accel_perf options: 00:21:05.734 [-h help message] 00:21:05.734 [-q queue depth per core] 00:21:05.734 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:21:05.734 [-T number of threads per core 00:21:05.734 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:21:05.734 [-t time in seconds] 00:21:05.734 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:21:05.734 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:21:05.734 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:21:05.734 [-l for compress/decompress workloads, name of uncompressed input file 00:21:05.734 [-S for crc32c workload, use this seed value (default 0) 00:21:05.735 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:21:05.735 [-f for fill workload, use this BYTE value (default 255) 00:21:05.735 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:21:05.735 [-y verify result if this switch is on] 00:21:05.735 [-a tasks to allocate per core (default: same value as -q)] 00:21:05.735 Can be used to spread operations across a wider range of memory. 00:21:05.735 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:21:05.735 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:05.735 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:05.735 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:05.735 00:21:05.735 real 0m0.037s 00:21:05.735 user 0m0.025s 00:21:05.735 sys 0m0.012s 00:21:05.735 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:05.735 11:29:34 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:21:05.735 ************************************ 00:21:05.735 END TEST accel_negative_buffers 00:21:05.735 ************************************ 00:21:05.735 Error: writing output failed: Broken pipe 00:21:05.735 11:29:34 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:21:05.735 11:29:34 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:21:05.735 11:29:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:05.735 11:29:34 accel -- common/autotest_common.sh@10 -- # set +x 00:21:05.735 ************************************ 00:21:05.735 START TEST accel_crc32c 00:21:05.735 ************************************ 00:21:05.735 11:29:34 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:05.735 11:29:34 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:21:05.996 [2024-06-10 11:29:34.727835] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:05.996 [2024-06-10 11:29:34.727924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128457 ] 00:21:05.996 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.996 [2024-06-10 11:29:34.788711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.996 [2024-06-10 11:29:34.852471] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:05.996 11:29:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:35 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:21:07.384 11:29:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:07.384 00:21:07.384 real 0m1.283s 00:21:07.384 user 0m1.191s 00:21:07.384 sys 0m0.102s 00:21:07.384 11:29:35 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:07.384 11:29:35 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:21:07.384 ************************************ 00:21:07.384 END TEST accel_crc32c 00:21:07.384 ************************************ 00:21:07.384 11:29:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:21:07.384 11:29:36 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:21:07.384 11:29:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:07.384 11:29:36 accel -- common/autotest_common.sh@10 -- # set +x 00:21:07.384 ************************************ 00:21:07.384 START TEST accel_crc32c_C2 00:21:07.384 ************************************ 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:21:07.384 11:29:36 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:21:07.384 [2024-06-10 11:29:36.085785] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:07.384 [2024-06-10 11:29:36.085848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2128810 ] 00:21:07.384 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.384 [2024-06-10 11:29:36.146347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.384 [2024-06-10 11:29:36.211134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.384 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:07.385 11:29:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:08.772 
11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:08.772 00:21:08.772 real 0m1.283s 00:21:08.772 user 0m1.190s 00:21:08.772 sys 0m0.104s 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:08.772 11:29:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.772 ************************************ 00:21:08.772 END TEST accel_crc32c_C2 00:21:08.772 ************************************ 00:21:08.773 11:29:37 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:21:08.773 11:29:37 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:21:08.773 11:29:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:08.773 11:29:37 accel -- common/autotest_common.sh@10 -- # set +x 00:21:08.773 ************************************ 00:21:08.773 START TEST accel_copy 00:21:08.773 ************************************ 00:21:08.773 11:29:37 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:21:08.773 11:29:37 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:21:08.773 [2024-06-10 11:29:37.445388] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:08.773 [2024-06-10 11:29:37.445451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129125 ] 00:21:08.773 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.773 [2024-06-10 11:29:37.505457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.773 [2024-06-10 11:29:37.569733] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:08.773 11:29:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:10.159 11:29:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:10.159 11:29:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:10.159 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:10.159 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:10.159 11:29:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:10.159 11:29:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:21:10.160 11:29:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:10.160 00:21:10.160 real 0m1.282s 00:21:10.160 user 0m1.196s 00:21:10.160 sys 0m0.098s 00:21:10.160 11:29:38 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:10.160 11:29:38 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:21:10.160 ************************************ 00:21:10.160 END TEST accel_copy 00:21:10.160 ************************************ 00:21:10.160 11:29:38 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:10.160 11:29:38 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:21:10.160 11:29:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:10.160 11:29:38 accel -- common/autotest_common.sh@10 -- # set +x 00:21:10.160 ************************************ 00:21:10.160 START TEST accel_fill 00:21:10.160 ************************************ 00:21:10.160 11:29:38 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:10.160 11:29:38 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:21:10.160 [2024-06-10 11:29:38.806530] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:10.160 [2024-06-10 11:29:38.806621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129316 ] 00:21:10.160 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.160 [2024-06-10 11:29:38.867349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.160 [2024-06-10 11:29:38.931631] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:10.160 11:29:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:21:11.101 11:29:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:11.101 00:21:11.101 real 0m1.286s 00:21:11.101 user 0m1.200s 00:21:11.101 sys 0m0.097s 00:21:11.101 11:29:40 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:11.101 11:29:40 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:21:11.101 ************************************ 00:21:11.101 END TEST accel_fill 00:21:11.101 ************************************ 00:21:11.362 11:29:40 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:21:11.362 11:29:40 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:21:11.362 11:29:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:11.362 11:29:40 accel -- common/autotest_common.sh@10 -- # set +x 00:21:11.362 ************************************ 00:21:11.362 START TEST accel_copy_crc32c 00:21:11.362 ************************************ 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:21:11.362 [2024-06-10 11:29:40.163778] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:11.362 [2024-06-10 11:29:40.163841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129551 ] 00:21:11.362 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.362 [2024-06-10 11:29:40.225889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.362 [2024-06-10 11:29:40.296490] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.362 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:11.623 11:29:40 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:11.623 11:29:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:12.564 00:21:12.564 real 0m1.289s 00:21:12.564 user 0m1.198s 00:21:12.564 sys 0m0.104s 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:12.564 11:29:41 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:21:12.564 ************************************ 00:21:12.564 END TEST accel_copy_crc32c 00:21:12.564 ************************************ 00:21:12.564 11:29:41 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:21:12.564 11:29:41 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:21:12.564 11:29:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:12.564 11:29:41 accel -- common/autotest_common.sh@10 -- # set +x 00:21:12.564 ************************************ 00:21:12.564 START TEST accel_copy_crc32c_C2 00:21:12.564 ************************************ 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:21:12.564 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:21:12.564 [2024-06-10 11:29:41.532156] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:12.564 [2024-06-10 11:29:41.532248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129898 ] 00:21:12.826 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.826 [2024-06-10 11:29:41.593636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.826 [2024-06-10 11:29:41.659057] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.826 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:21:12.827 11:29:41 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:12.827 11:29:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:14.210 00:21:14.210 real 0m1.285s 00:21:14.210 user 0m1.196s 00:21:14.210 sys 0m0.100s 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:14.210 11:29:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:21:14.210 ************************************ 00:21:14.210 END TEST accel_copy_crc32c_C2 00:21:14.210 ************************************ 00:21:14.210 11:29:42 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:21:14.210 11:29:42 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:21:14.210 11:29:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:14.210 11:29:42 accel -- common/autotest_common.sh@10 -- # set +x 00:21:14.210 ************************************ 00:21:14.210 START TEST accel_dualcast 00:21:14.210 ************************************ 00:21:14.210 11:29:42 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:21:14.210 11:29:42 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:21:14.210 [2024-06-10 11:29:42.889454] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:14.210 [2024-06-10 11:29:42.889544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130247 ] 00:21:14.210 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.210 [2024-06-10 11:29:42.951292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.210 [2024-06-10 11:29:43.014299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 
11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:14.210 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:14.211 11:29:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:21:15.596 11:29:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:15.596 00:21:15.596 real 0m1.284s 00:21:15.596 user 0m1.198s 00:21:15.596 sys 0m0.097s 00:21:15.596 11:29:44 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:15.596 11:29:44 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:21:15.596 ************************************ 00:21:15.596 END TEST accel_dualcast 00:21:15.596 ************************************ 00:21:15.596 11:29:44 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:21:15.596 11:29:44 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:21:15.596 11:29:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:15.596 11:29:44 accel -- common/autotest_common.sh@10 -- # set +x 00:21:15.596 ************************************ 00:21:15.596 START TEST accel_compare 00:21:15.596 ************************************ 00:21:15.596 11:29:44 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:21:15.596 [2024-06-10 11:29:44.249553] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:15.596 [2024-06-10 11:29:44.249613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130603 ] 00:21:15.596 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.596 [2024-06-10 11:29:44.309590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.596 [2024-06-10 11:29:44.373877] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.596 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:15.597 11:29:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:16.537 11:29:45 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:21:16.537 11:29:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:21:16.538 11:29:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:16.538 11:29:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:21:16.538 11:29:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:16.538 00:21:16.538 real 0m1.282s 00:21:16.538 user 0m1.193s 00:21:16.538 sys 0m0.100s 00:21:16.538 11:29:45 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:16.538 11:29:45 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:21:16.538 ************************************ 00:21:16.538 END TEST accel_compare 00:21:16.538 ************************************ 00:21:16.798 11:29:45 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:21:16.798 11:29:45 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:21:16.798 11:29:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:16.798 11:29:45 accel -- common/autotest_common.sh@10 -- # set +x 00:21:16.798 ************************************ 00:21:16.798 START TEST accel_xor 00:21:16.798 ************************************ 00:21:16.798 11:29:45 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:21:16.798 11:29:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:21:16.798 [2024-06-10 11:29:45.609037] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:16.798 [2024-06-10 11:29:45.609128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130808 ] 00:21:16.798 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.798 [2024-06-10 11:29:45.668922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.799 [2024-06-10 11:29:45.732451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:16.799 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:17.078 11:29:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.040 
11:29:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:18.040 00:21:18.040 real 0m1.281s 00:21:18.040 user 0m1.191s 00:21:18.040 sys 0m0.102s 00:21:18.040 11:29:46 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:18.040 11:29:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:21:18.040 ************************************ 00:21:18.040 END TEST accel_xor 00:21:18.040 ************************************ 00:21:18.040 11:29:46 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:21:18.040 11:29:46 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:21:18.040 11:29:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:18.040 11:29:46 accel -- common/autotest_common.sh@10 -- # set +x 00:21:18.040 ************************************ 00:21:18.040 START TEST accel_xor 00:21:18.040 ************************************ 00:21:18.040 11:29:46 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:21:18.040 11:29:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:21:18.040 [2024-06-10 11:29:46.967660] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:18.041 [2024-06-10 11:29:46.967876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131006 ] 00:21:18.041 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.302 [2024-06-10 11:29:47.028700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.302 [2024-06-10 11:29:47.094407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.302 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:18.303 11:29:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:19.686 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:19.687 
11:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:21:19.687 11:29:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:19.687 00:21:19.687 real 0m1.285s 00:21:19.687 user 0m1.189s 00:21:19.687 sys 0m0.107s 00:21:19.687 11:29:48 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:19.687 11:29:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:21:19.687 ************************************ 00:21:19.687 END TEST accel_xor 00:21:19.687 ************************************ 00:21:19.687 11:29:48 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:21:19.687 11:29:48 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:21:19.687 11:29:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:19.687 11:29:48 accel -- common/autotest_common.sh@10 -- # set +x 00:21:19.687 ************************************ 00:21:19.687 START TEST accel_dif_verify 00:21:19.687 ************************************ 00:21:19.687 11:29:48 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:21:19.687 [2024-06-10 11:29:48.329428] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:19.687 [2024-06-10 11:29:48.329520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131341 ] 00:21:19.687 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.687 [2024-06-10 11:29:48.390834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.687 [2024-06-10 11:29:48.456612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 
11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:19.687 11:29:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:20.628 
11:29:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:21:20.628 11:29:49 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:20.628 00:21:20.628 real 0m1.286s 00:21:20.628 user 0m1.194s 00:21:20.628 sys 0m0.104s 00:21:20.628 11:29:49 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:20.628 11:29:49 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:21:20.628 ************************************ 00:21:20.628 END TEST accel_dif_verify 00:21:20.628 ************************************ 00:21:20.888 11:29:49 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:21:20.889 11:29:49 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:21:20.889 11:29:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:20.889 11:29:49 accel -- common/autotest_common.sh@10 -- # set +x 00:21:20.889 ************************************ 00:21:20.889 START TEST accel_dif_generate 00:21:20.889 ************************************ 00:21:20.889 11:29:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:20.889 
11:29:49 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:21:20.889 [2024-06-10 11:29:49.691543] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:20.889 [2024-06-10 11:29:49.691636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131693 ] 00:21:20.889 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.889 [2024-06-10 11:29:49.761540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.889 [2024-06-10 11:29:49.827232] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:20.889 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:21.149 11:29:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:22.089 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:21:22.090 11:29:50 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:22.090 00:21:22.090 real 0m1.294s 00:21:22.090 user 0m1.201s 00:21:22.090 sys 
0m0.105s 00:21:22.090 11:29:50 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:22.090 11:29:50 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:21:22.090 ************************************ 00:21:22.090 END TEST accel_dif_generate 00:21:22.090 ************************************ 00:21:22.090 11:29:50 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:21:22.090 11:29:50 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:21:22.090 11:29:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:22.090 11:29:50 accel -- common/autotest_common.sh@10 -- # set +x 00:21:22.090 ************************************ 00:21:22.090 START TEST accel_dif_generate_copy 00:21:22.090 ************************************ 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:21:22.090 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:21:22.350 [2024-06-10 11:29:51.062084] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:22.350 [2024-06-10 11:29:51.062171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132040 ] 00:21:22.350 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.350 [2024-06-10 11:29:51.125010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.350 [2024-06-10 11:29:51.193350] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.350 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:22.351 11:29:51 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:23.732 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:23.732 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:23.732 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:21:23.732 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:23.732 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:23.733 00:21:23.733 real 0m1.289s 00:21:23.733 user 0m1.205s 00:21:23.733 sys 0m0.095s 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:23.733 11:29:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:21:23.733 ************************************ 00:21:23.733 END TEST accel_dif_generate_copy 00:21:23.733 ************************************ 00:21:23.733 11:29:52 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:21:23.733 11:29:52 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:23.733 11:29:52 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:21:23.733 11:29:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:23.733 11:29:52 accel -- common/autotest_common.sh@10 -- # set +x 00:21:23.733 ************************************ 00:21:23.733 START TEST accel_comp 00:21:23.733 ************************************ 00:21:23.733 11:29:52 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:21:23.733 [2024-06-10 11:29:52.429647] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:23.733 [2024-06-10 11:29:52.429731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132329 ] 00:21:23.733 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.733 [2024-06-10 11:29:52.489459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.733 [2024-06-10 11:29:52.553313] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 
11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:23.733 11:29:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:21:25.117 11:29:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:25.117 00:21:25.117 real 0m1.286s 00:21:25.117 user 0m1.204s 00:21:25.117 sys 0m0.096s 00:21:25.117 11:29:53 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:25.117 11:29:53 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:21:25.117 ************************************ 00:21:25.117 END TEST accel_comp 00:21:25.117 ************************************ 00:21:25.117 11:29:53 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:25.117 11:29:53 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:21:25.117 11:29:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:25.117 11:29:53 accel -- common/autotest_common.sh@10 -- # set +x 00:21:25.117 ************************************ 00:21:25.117 START TEST accel_decomp 00:21:25.117 ************************************ 00:21:25.117 11:29:53 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:21:25.117 [2024-06-10 11:29:53.793448] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:25.117 [2024-06-10 11:29:53.793509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132509 ] 00:21:25.117 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.117 [2024-06-10 11:29:53.853612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.117 [2024-06-10 11:29:53.917948] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.117 11:29:53 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:21:25.117 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:25.118 11:29:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:26.500 11:29:55 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.500 00:21:26.500 real 0m1.285s 00:21:26.500 user 0m1.203s 00:21:26.500 sys 0m0.095s 00:21:26.500 11:29:55 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:26.500 11:29:55 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:21:26.500 ************************************ 00:21:26.500 END TEST accel_decomp 00:21:26.500 ************************************ 00:21:26.500 
11:29:55 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:21:26.500 11:29:55 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:21:26.500 11:29:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:26.500 11:29:55 accel -- common/autotest_common.sh@10 -- # set +x 00:21:26.500 ************************************ 00:21:26.500 START TEST accel_decomp_full 00:21:26.500 ************************************ 00:21:26.500 11:29:55 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:21:26.500 [2024-06-10 11:29:55.152436] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:21:26.500 [2024-06-10 11:29:55.152530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132781 ] 00:21:26.500 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.500 [2024-06-10 11:29:55.215486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.500 [2024-06-10 11:29:55.278361] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.500 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:26.501 11:29:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:27.442 11:29:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:27.442 11:29:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:27.442 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:27.442 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:21:27.442 11:29:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:27.703 11:29:56 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:27.703 00:21:27.703 real 0m1.295s 00:21:27.703 user 0m1.202s 00:21:27.703 sys 0m0.105s 00:21:27.703 11:29:56 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:27.703 11:29:56 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:21:27.703 ************************************ 00:21:27.703 END TEST accel_decomp_full 00:21:27.703 ************************************ 00:21:27.703 11:29:56 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:21:27.703 11:29:56 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:21:27.703 11:29:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:27.703 11:29:56 accel -- common/autotest_common.sh@10 -- # set +x 00:21:27.703 ************************************ 00:21:27.703 START TEST accel_decomp_mcore 00:21:27.703 ************************************ 00:21:27.703 11:29:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:21:27.703 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:21:27.703 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:21:27.703 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.703 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.703 11:29:56 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:21:27.704 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:21:27.704 [2024-06-10 11:29:56.521606] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:27.704 [2024-06-10 11:29:56.521677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133134 ] 00:21:27.704 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.704 [2024-06-10 11:29:56.582701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.704 [2024-06-10 11:29:56.650453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.704 [2024-06-10 11:29:56.650593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.704 [2024-06-10 11:29:56.650963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.704 [2024-06-10 11:29:56.650964] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:27.964 11:29:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:28.905 00:21:28.905 real 0m1.294s 00:21:28.905 user 0m4.426s 00:21:28.905 sys 0m0.106s 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:28.905 11:29:57 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:21:28.905 ************************************ 00:21:28.905 END TEST accel_decomp_mcore 00:21:28.906 ************************************ 00:21:28.906 11:29:57 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:28.906 11:29:57 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:21:28.906 11:29:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:28.906 11:29:57 accel -- common/autotest_common.sh@10 -- # set +x 00:21:28.906 ************************************ 00:21:28.906 START TEST accel_decomp_full_mcore 00:21:28.906 ************************************ 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:21:28.906 11:29:57 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:21:29.166 [2024-06-10 11:29:57.891789] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:29.166 [2024-06-10 11:29:57.891851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133486 ] 00:21:29.166 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.166 [2024-06-10 11:29:57.952591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.166 [2024-06-10 11:29:58.020854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.166 [2024-06-10 11:29:58.020994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.166 [2024-06-10 11:29:58.021159] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.166 [2024-06-10 11:29:58.021159] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.166 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:29.167 11:29:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.550 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:30.551 00:21:30.551 real 0m1.309s 00:21:30.551 user 0m4.486s 00:21:30.551 sys 0m0.105s 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:30.551 11:29:59 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:21:30.551 ************************************ 00:21:30.551 END TEST accel_decomp_full_mcore 00:21:30.551 ************************************ 00:21:30.551 11:29:59 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:21:30.551 11:29:59 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:21:30.551 11:29:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:30.551 11:29:59 accel -- common/autotest_common.sh@10 -- # set +x 00:21:30.551 ************************************ 00:21:30.551 START TEST accel_decomp_mthread 00:21:30.551 ************************************ 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
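The xtrace block above is accel.sh's settings loop: accel_test splits each key/value line reported back by accel_perf on ':' (IFS=:, read -r var val) and, in the case statement, remembers the opcode and module that were exercised, which the accel.sh@27 checks then assert. A minimal sketch of that loop, assuming the case patterns; only the IFS=:/read/case skeleton and the two assignments are taken from the trace:

while IFS=: read -r var val; do
  case "$var" in
    *opc*) accel_opc=$val ;;        # e.g. decompress
    *module*) accel_module=$val ;;  # e.g. software
  esac
done <<< "$accel_perf_output"       # hypothetical variable; the source of these lines is not shown in this excerpt

[[ -n $accel_module && -n $accel_opc && $accel_module == software ]]

The final test mirrors the three accel.sh@27 checks that close each sub-test in this log.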
00:21:30.551 [2024-06-10 11:29:59.277923] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:30.551 [2024-06-10 11:29:59.278017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133809 ] 00:21:30.551 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.551 [2024-06-10 11:29:59.339464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.551 [2024-06-10 11:29:59.404875] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:30.551 11:29:59 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:31.935 00:21:31.935 real 0m1.293s 00:21:31.935 user 0m1.210s 00:21:31.935 sys 0m0.097s 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:31.935 11:30:00 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:21:31.935 ************************************ 00:21:31.935 END TEST accel_decomp_mthread 00:21:31.935 ************************************ 00:21:31.935 11:30:00 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:21:31.935 11:30:00 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:21:31.935 11:30:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:31.935 11:30:00 
accel -- common/autotest_common.sh@10 -- # set +x 00:21:31.935 ************************************ 00:21:31.935 START TEST accel_decomp_full_mthread 00:21:31.935 ************************************ 00:21:31.935 11:30:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:21:31.935 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:21:31.935 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:21:31.935 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.935 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.935 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:21:31.935 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:21:31.936 [2024-06-10 11:30:00.649442] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
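The accel_perf command line traced just above drives this variant directly from the build tree; a hedged re-creation for running it by hand, with the option readings inferred from how the four decompress variants in this log differ (they are not taken from accel_perf's own help text):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
args=(
  -t 1                           # duration; the trace records '1 seconds'
  -w decompress                  # workload under test
  -l "$SPDK_DIR/test/accel/bib"  # compressed input used by every decompress variant
  -y                             # enable result verification (inferred)
  -o 0                           # only the "full" variants pass this; they report '111250 bytes' instead of '4096 bytes' (inferred: full-size transfers)
  -T 2                           # only the mthread variants pass this (inferred: two worker threads)
)
"$SPDK_DIR/build/examples/accel_perf" "${args[@]}"
# The harness also passes -c /dev/fd/62, a generated JSON accel config fed in via
# process substitution; it is omitted here because its contents are not visible in
# this excerpt. The mcore variants pass -m 0xf instead of -T 2, matching the four
# "Reactor started on core N" notices earlier in the log.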
00:21:31.936 [2024-06-10 11:30:00.649541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134043 ] 00:21:31.936 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.936 [2024-06-10 11:30:00.715085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.936 [2024-06-10 11:30:00.787442] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:31.936 11:30:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:33.321 00:21:33.321 real 0m1.328s 00:21:33.321 user 0m1.237s 00:21:33.321 sys 0m0.102s 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:33.321 11:30:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:21:33.321 ************************************ 00:21:33.321 END TEST accel_decomp_full_mthread 00:21:33.321 
************************************ 00:21:33.321 11:30:01 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:21:33.321 11:30:01 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:21:33.321 11:30:01 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:21:33.321 11:30:01 accel -- accel/accel.sh@137 -- # build_accel_config 00:21:33.321 11:30:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:33.321 11:30:01 accel -- common/autotest_common.sh@10 -- # set +x 00:21:33.321 11:30:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:21:33.321 11:30:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:21:33.321 11:30:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:21:33.321 11:30:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:21:33.321 11:30:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:21:33.321 11:30:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:21:33.321 11:30:01 accel -- accel/accel.sh@41 -- # jq -r . 00:21:33.321 ************************************ 00:21:33.321 START TEST accel_dif_functional_tests 00:21:33.321 ************************************ 00:21:33.321 11:30:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:21:33.321 [2024-06-10 11:30:02.079749] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:33.321 [2024-06-10 11:30:02.079802] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134339 ] 00:21:33.321 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.321 [2024-06-10 11:30:02.144400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:33.321 [2024-06-10 11:30:02.220525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.321 [2024-06-10 11:30:02.220666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.321 [2024-06-10 11:30:02.220676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.321 00:21:33.321 00:21:33.321 CUnit - A unit testing framework for C - Version 2.1-3 00:21:33.321 http://cunit.sourceforge.net/ 00:21:33.321 00:21:33.321 00:21:33.321 Suite: accel_dif 00:21:33.321 Test: verify: DIF generated, GUARD check ...passed 00:21:33.321 Test: verify: DIF generated, APPTAG check ...passed 00:21:33.321 Test: verify: DIF generated, REFTAG check ...passed 00:21:33.321 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:30:02.276461] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:21:33.321 passed 00:21:33.321 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:30:02.276502] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:21:33.321 passed 00:21:33.321 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:30:02.276523] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:21:33.321 passed 00:21:33.321 Test: verify: APPTAG correct, APPTAG check ...passed 00:21:33.321 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 11:30:02.276571] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:21:33.321 passed 00:21:33.321 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:21:33.321 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:21:33.321 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:21:33.321 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 11:30:02.276689] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:21:33.321 passed 00:21:33.321 Test: verify copy: DIF generated, GUARD check ...passed 00:21:33.321 Test: verify copy: DIF generated, APPTAG check ...passed 00:21:33.321 Test: verify copy: DIF generated, REFTAG check ...passed 00:21:33.321 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 11:30:02.276809] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:21:33.321 passed 00:21:33.321 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 11:30:02.276831] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:21:33.321 passed 00:21:33.321 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 11:30:02.276852] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:21:33.321 passed 00:21:33.321 Test: generate copy: DIF generated, GUARD check ...passed 00:21:33.321 Test: generate copy: DIF generated, APTTAG check ...passed 00:21:33.321 Test: generate copy: DIF generated, REFTAG check ...passed 00:21:33.321 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:21:33.321 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:21:33.322 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:21:33.322 Test: generate copy: iovecs-len validate ...[2024-06-10 11:30:02.277043] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:21:33.322 passed 00:21:33.322 Test: generate copy: buffer alignment validate ...passed 00:21:33.322 00:21:33.322 Run Summary: Type Total Ran Passed Failed Inactive 00:21:33.322 suites 1 1 n/a 0 0 00:21:33.322 tests 26 26 26 0 0 00:21:33.322 asserts 115 115 115 0 n/a 00:21:33.322 00:21:33.322 Elapsed time = 0.002 seconds 00:21:33.583 00:21:33.583 real 0m0.371s 00:21:33.583 user 0m0.502s 00:21:33.583 sys 0m0.133s 00:21:33.583 11:30:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:33.583 11:30:02 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:21:33.583 ************************************ 00:21:33.583 END TEST accel_dif_functional_tests 00:21:33.583 ************************************ 00:21:33.583 00:21:33.583 real 0m30.157s 00:21:33.583 user 0m33.904s 00:21:33.583 sys 0m4.079s 00:21:33.583 11:30:02 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:33.583 11:30:02 accel -- common/autotest_common.sh@10 -- # set +x 00:21:33.583 ************************************ 00:21:33.583 END TEST accel 00:21:33.583 ************************************ 00:21:33.583 11:30:02 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:21:33.583 11:30:02 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:33.583 11:30:02 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:33.583 11:30:02 -- common/autotest_common.sh@10 -- # set +x 00:21:33.583 ************************************ 00:21:33.583 START TEST accel_rpc 00:21:33.583 ************************************ 00:21:33.583 11:30:02 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:21:33.844 * Looking for test storage... 00:21:33.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:21:33.844 11:30:02 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:33.844 11:30:02 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2134665 00:21:33.844 11:30:02 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2134665 00:21:33.844 11:30:02 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:33.844 11:30:02 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 2134665 ']' 00:21:33.844 11:30:02 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.844 11:30:02 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:33.844 11:30:02 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.844 11:30:02 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:33.844 11:30:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:33.844 [2024-06-10 11:30:02.678201] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
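The DIF functional tests that just finished are a standalone CUnit binary driven the same way as accel_perf; a minimal sketch for invoking it directly (the '{}' config is a placeholder assumption; the harness pipes its generated config in over /dev/fd/62):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/test/accel/dif/dif" -c <(printf '{}')
# Expect the same 26 tests / 115 asserts summary as above; the *ERROR* lines in the
# log are intentional, they come from the negative verify cases where wrong Guard,
# App Tag and Ref Tag values are injected on purpose.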
00:21:33.844 [2024-06-10 11:30:02.678276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134665 ] 00:21:33.844 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.844 [2024-06-10 11:30:02.742185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.104 [2024-06-10 11:30:02.816816] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.676 11:30:03 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:34.676 11:30:03 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:21:34.676 11:30:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:21:34.676 11:30:03 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:21:34.676 11:30:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:21:34.676 11:30:03 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:21:34.676 11:30:03 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:21:34.676 11:30:03 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:34.676 11:30:03 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:34.676 11:30:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:34.676 ************************************ 00:21:34.676 START TEST accel_assign_opcode 00:21:34.676 ************************************ 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:34.676 [2024-06-10 11:30:03.583009] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:34.676 [2024-06-10 11:30:03.595032] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.676 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.937 11:30:03 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.937 software 00:21:34.937 00:21:34.937 real 0m0.208s 00:21:34.937 user 0m0.047s 00:21:34.937 sys 0m0.012s 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:34.937 11:30:03 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:21:34.937 ************************************ 00:21:34.937 END TEST accel_assign_opcode 00:21:34.937 ************************************ 00:21:34.937 11:30:03 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2134665 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 2134665 ']' 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 2134665 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2134665 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2134665' 00:21:34.937 killing process with pid 2134665 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@968 -- # kill 2134665 00:21:34.937 11:30:03 accel_rpc -- common/autotest_common.sh@973 -- # wait 2134665 00:21:35.197 00:21:35.197 real 0m1.571s 00:21:35.197 user 0m1.743s 00:21:35.197 sys 0m0.430s 00:21:35.197 11:30:04 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:35.197 11:30:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:35.197 ************************************ 00:21:35.197 END TEST accel_rpc 00:21:35.197 ************************************ 00:21:35.197 11:30:04 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:21:35.197 11:30:04 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:35.197 11:30:04 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:35.197 11:30:04 -- common/autotest_common.sh@10 -- # set +x 00:21:35.197 ************************************ 00:21:35.197 START TEST app_cmdline 00:21:35.197 ************************************ 00:21:35.197 11:30:04 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:21:35.459 * Looking for test storage... 
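The accel_assign_opcode test above exercises the accel RPCs end to end against a freshly started target; a hedged sketch of the same sequence run by hand (the sleep is a crude stand-in for the harness's waitforlisten helper, and the default RPC socket is assumed):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &   # start the target but hold off subsystem init
tgt_pid=$!
sleep 1                                           # crude stand-in for waitforlisten

"$RPC" accel_assign_opc -o copy -m incorrect      # accepted before init, even for a module that does not exist
"$RPC" accel_assign_opc -o copy -m software       # the later assignment wins
"$RPC" framework_start_init                       # finish subsystem initialization
"$RPC" accel_get_opc_assignments | jq -r .copy    # prints: software

kill "$tgt_pid"

The last line mirrors the test's own check, which pipes accel_get_opc_assignments through jq -r .copy and greps for software.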
00:21:35.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:21:35.459 11:30:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:21:35.459 11:30:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2135053 00:21:35.459 11:30:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2135053 00:21:35.459 11:30:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:21:35.459 11:30:04 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 2135053 ']' 00:21:35.459 11:30:04 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.459 11:30:04 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:35.459 11:30:04 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.459 11:30:04 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:35.459 11:30:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:35.459 [2024-06-10 11:30:04.325492] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:21:35.459 [2024-06-10 11:30:04.325562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135053 ] 00:21:35.459 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.459 [2024-06-10 11:30:04.391460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.719 [2024-06-10 11:30:04.466238] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.294 11:30:05 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:36.294 11:30:05 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:21:36.294 11:30:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:21:36.554 { 00:21:36.554 "version": "SPDK v24.09-pre git sha1 ee2eae53a", 00:21:36.554 "fields": { 00:21:36.554 "major": 24, 00:21:36.555 "minor": 9, 00:21:36.555 "patch": 0, 00:21:36.555 "suffix": "-pre", 00:21:36.555 "commit": "ee2eae53a" 00:21:36.555 } 00:21:36.555 } 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:21:36.555 11:30:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:36.555 11:30:05 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:36.816 request: 00:21:36.816 { 00:21:36.816 "method": "env_dpdk_get_mem_stats", 00:21:36.816 "req_id": 1 00:21:36.816 } 00:21:36.816 Got JSON-RPC error response 00:21:36.816 response: 00:21:36.816 { 00:21:36.816 "code": -32601, 00:21:36.816 "message": "Method not found" 00:21:36.816 } 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:36.816 11:30:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2135053 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 2135053 ']' 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 2135053 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2135053 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2135053' 00:21:36.816 killing process with pid 2135053 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@968 -- # kill 2135053 00:21:36.816 11:30:05 app_cmdline -- common/autotest_common.sh@973 -- # wait 2135053 00:21:37.078 00:21:37.078 real 0m1.729s 00:21:37.078 user 0m2.186s 00:21:37.078 sys 0m0.429s 00:21:37.078 11:30:05 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:37.078 11:30:05 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:37.078 ************************************ 00:21:37.078 END TEST app_cmdline 00:21:37.078 ************************************ 00:21:37.078 11:30:05 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:21:37.078 11:30:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:21:37.078 11:30:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:37.078 11:30:05 -- common/autotest_common.sh@10 -- # set +x 00:21:37.078 ************************************ 00:21:37.078 START TEST version 00:21:37.078 ************************************ 00:21:37.078 11:30:05 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:21:37.340 * Looking for test storage... 00:21:37.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:21:37.340 11:30:06 version -- app/version.sh@17 -- # get_header_version major 00:21:37.340 11:30:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # cut -f2 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # tr -d '"' 00:21:37.340 11:30:06 version -- app/version.sh@17 -- # major=24 00:21:37.340 11:30:06 version -- app/version.sh@18 -- # get_header_version minor 00:21:37.340 11:30:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # cut -f2 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # tr -d '"' 00:21:37.340 11:30:06 version -- app/version.sh@18 -- # minor=9 00:21:37.340 11:30:06 version -- app/version.sh@19 -- # get_header_version patch 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # tr -d '"' 00:21:37.340 11:30:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # cut -f2 00:21:37.340 11:30:06 version -- app/version.sh@19 -- # patch=0 00:21:37.340 11:30:06 version -- app/version.sh@20 -- # get_header_version suffix 00:21:37.340 11:30:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # cut -f2 00:21:37.340 11:30:06 version -- app/version.sh@14 -- # tr -d '"' 00:21:37.340 11:30:06 version -- app/version.sh@20 -- # suffix=-pre 00:21:37.340 11:30:06 version -- app/version.sh@22 -- # version=24.9 00:21:37.340 11:30:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:21:37.340 11:30:06 version -- app/version.sh@28 -- # version=24.9rc0 00:21:37.340 11:30:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:21:37.340 11:30:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:21:37.340 11:30:06 version -- app/version.sh@30 -- # py_version=24.9rc0 
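The version test above pulls each field out of include/spdk/version.h with the same grep/cut/tr pipeline and then cross-checks the Python package. Condensed into a sketch run from the SPDK source tree (an equivalent of what get_header_version does, not the script verbatim):

  ver_h=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  echo "$major.$minor"   # 24.9 here; the script maps the -pre suffix to the rc0 tag and
                         # compares 24.9rc0 against python3 -c 'import spdk; print(spdk.__version__)'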
00:21:37.340 11:30:06 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:21:37.340 00:21:37.340 real 0m0.185s 00:21:37.340 user 0m0.089s 00:21:37.340 sys 0m0.136s 00:21:37.340 11:30:06 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:37.340 11:30:06 version -- common/autotest_common.sh@10 -- # set +x 00:21:37.340 ************************************ 00:21:37.340 END TEST version 00:21:37.340 ************************************ 00:21:37.340 11:30:06 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@198 -- # uname -s 00:21:37.340 11:30:06 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:21:37.340 11:30:06 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:21:37.340 11:30:06 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:21:37.340 11:30:06 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:37.340 11:30:06 -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:37.340 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:21:37.340 11:30:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:21:37.340 11:30:06 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:21:37.340 11:30:06 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:21:37.340 11:30:06 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:37.340 11:30:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:37.340 11:30:06 -- common/autotest_common.sh@10 -- # set +x 00:21:37.340 ************************************ 00:21:37.340 START TEST nvmf_tcp 00:21:37.340 ************************************ 00:21:37.340 11:30:06 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:21:37.602 * Looking for test storage... 00:21:37.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.602 11:30:06 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.602 11:30:06 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.602 11:30:06 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.602 11:30:06 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.602 11:30:06 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.602 11:30:06 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.602 11:30:06 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:21:37.602 11:30:06 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:21:37.602 11:30:06 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:37.602 11:30:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:21:37.602 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:21:37.602 11:30:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:37.602 11:30:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:37.602 11:30:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.602 ************************************ 00:21:37.602 START TEST nvmf_example 00:21:37.602 ************************************ 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:21:37.602 * Looking for test storage... 
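nvmf/common.sh, sourced above, mostly pins fabric defaults (ports 4420-4422, the 192.168.100 prefix, 127.0.0.1 for TCP loopback runs) and derives a host identity from nvme-cli. A sketch of that identity step, assuming nvme-cli is installed; the variable names are common.sh's own, and taking everything after the last colon is one way to arrive at the HOSTID value this log shows:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # just the <uuid> part
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # later initiator-side tests pass "${NVME_HOST[@]}" to 'nvme connect'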
00:21:37.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.602 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.863 11:30:06 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.864 11:30:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.066 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:46.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:46.067 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:46.067 Found net devices under 
0000:4b:00.0: cvl_0_0 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:46.067 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:46.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:21:46.067 00:21:46.067 --- 10.0.0.2 ping statistics --- 00:21:46.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.067 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:21:46.067 00:21:46.067 --- 10.0.0.1 ping statistics --- 00:21:46.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.067 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2139695 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2139695 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 2139695 ']' 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
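Everything nvmftestinit/nvmf_tcp_init did above can be read off the xtrace: the first e810 port (cvl_0_0) becomes the target side inside a fresh network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, port 4420 is opened, and a ping in each direction confirms the link. Condensed into a sketch (interface names and addresses are the ones from this run; on other hardware the cvl_* names will differ):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1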
00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:46.067 11:30:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.067 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.067 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:46.068 11:30:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:46.068 EAL: No free 2048 kB hugepages reported on node 1 
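The rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations against the example target (default /var/tmp/spdk.sock). The same target build-out as a sketch, with sizes and names exactly as in this run:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                       # creates Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # spdk_nvme_perf (the command shown above) then drives the -q 64 -o 4096 randrw workload for 10 s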
00:21:58.300 Initializing NVMe Controllers 00:21:58.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:58.300 Initialization complete. Launching workers. 00:21:58.300 ======================================================== 00:21:58.300 Latency(us) 00:21:58.300 Device Information : IOPS MiB/s Average min max 00:21:58.300 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16797.07 65.61 3809.88 848.18 16366.97 00:21:58.300 ======================================================== 00:21:58.300 Total : 16797.07 65.61 3809.88 848.18 16366.97 00:21:58.300 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.300 rmmod nvme_tcp 00:21:58.300 rmmod nvme_fabrics 00:21:58.300 rmmod nvme_keyring 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2139695 ']' 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2139695 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 2139695 ']' 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 2139695 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2139695 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2139695' 00:21:58.300 killing process with pid 2139695 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 2139695 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 2139695 00:21:58.300 nvmf threads initialize successfully 00:21:58.300 bdev subsystem init successfully 00:21:58.300 created a nvmf target service 00:21:58.300 create targets's poll groups done 00:21:58.300 all subsystems of target started 00:21:58.300 nvmf target is running 00:21:58.300 all subsystems of target stopped 00:21:58.300 destroy targets's poll groups done 00:21:58.300 destroyed the nvmf target service 00:21:58.300 bdev subsystem finish successfully 00:21:58.300 nvmf threads destroy successfully 00:21:58.300 11:30:25 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.300 11:30:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.561 11:30:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.561 11:30:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:21:58.561 11:30:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:58.561 11:30:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:58.561 00:21:58.561 real 0m20.983s 00:21:58.561 user 0m46.669s 00:21:58.561 sys 0m6.521s 00:21:58.561 11:30:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:58.561 11:30:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:21:58.561 ************************************ 00:21:58.561 END TEST nvmf_example 00:21:58.561 ************************************ 00:21:58.561 11:30:27 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:21:58.561 11:30:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:58.561 11:30:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:58.561 11:30:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.561 ************************************ 00:21:58.561 START TEST nvmf_filesystem 00:21:58.561 ************************************ 00:21:58.561 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:21:58.824 * Looking for test storage... 
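The nvmftestfini path at the end of the example test is the mirror image of the setup: unload the kernel initiator modules, kill the target, then drop the namespace and flush the initiator address. Roughly (pid, module and interface names as in this run; remove_spdk_ns is summarized here as a namespace delete):

  modprobe -v -r nvme-tcp              # the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid is the example target, 2139695 here
  ip netns delete cvl_0_0_ns_spdk      # approximately what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1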
00:21:58.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:21:58.824 11:30:27 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
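The CONFIG_* assignments being replayed here come straight from test/common/build_config.sh, and the applications.sh check a bit further down only verifies that they landed in the generated include/spdk/config.h; the mapping is mechanical. Two flags visible in this log, for illustration:

  # build_config.sh          ->   include/spdk/config.h
  # CONFIG_COVERAGE=y              #define SPDK_CONFIG_COVERAGE 1
  # CONFIG_ASAN=n                  #undef SPDK_CONFIG_ASAN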
00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:21:58.824 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:21:58.825 #define SPDK_CONFIG_H 00:21:58.825 #define SPDK_CONFIG_APPS 1 00:21:58.825 #define SPDK_CONFIG_ARCH native 00:21:58.825 #undef SPDK_CONFIG_ASAN 00:21:58.825 #undef SPDK_CONFIG_AVAHI 00:21:58.825 #undef SPDK_CONFIG_CET 00:21:58.825 #define SPDK_CONFIG_COVERAGE 1 00:21:58.825 #define SPDK_CONFIG_CROSS_PREFIX 00:21:58.825 #undef SPDK_CONFIG_CRYPTO 00:21:58.825 #undef SPDK_CONFIG_CRYPTO_MLX5 00:21:58.825 #undef SPDK_CONFIG_CUSTOMOCF 00:21:58.825 #undef SPDK_CONFIG_DAOS 00:21:58.825 #define SPDK_CONFIG_DAOS_DIR 00:21:58.825 #define SPDK_CONFIG_DEBUG 1 00:21:58.825 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:21:58.825 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:21:58.825 #define SPDK_CONFIG_DPDK_INC_DIR 00:21:58.825 #define SPDK_CONFIG_DPDK_LIB_DIR 00:21:58.825 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:21:58.825 #undef SPDK_CONFIG_DPDK_UADK 00:21:58.825 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:21:58.825 #define SPDK_CONFIG_EXAMPLES 1 00:21:58.825 #undef SPDK_CONFIG_FC 00:21:58.825 #define SPDK_CONFIG_FC_PATH 00:21:58.825 #define SPDK_CONFIG_FIO_PLUGIN 1 00:21:58.825 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:21:58.825 #undef SPDK_CONFIG_FUSE 00:21:58.825 #undef SPDK_CONFIG_FUZZER 00:21:58.825 #define SPDK_CONFIG_FUZZER_LIB 00:21:58.825 #undef SPDK_CONFIG_GOLANG 00:21:58.825 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:21:58.825 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:21:58.825 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:21:58.825 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:21:58.825 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:21:58.825 #undef SPDK_CONFIG_HAVE_LIBBSD 00:21:58.825 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:21:58.825 #define SPDK_CONFIG_IDXD 1 00:21:58.825 #define SPDK_CONFIG_IDXD_KERNEL 1 00:21:58.825 #undef SPDK_CONFIG_IPSEC_MB 00:21:58.825 #define SPDK_CONFIG_IPSEC_MB_DIR 00:21:58.825 #define SPDK_CONFIG_ISAL 1 00:21:58.825 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:21:58.825 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:21:58.825 #define SPDK_CONFIG_LIBDIR 00:21:58.825 #undef SPDK_CONFIG_LTO 00:21:58.825 #define SPDK_CONFIG_MAX_LCORES 00:21:58.825 #define SPDK_CONFIG_NVME_CUSE 1 00:21:58.825 #undef SPDK_CONFIG_OCF 00:21:58.825 #define SPDK_CONFIG_OCF_PATH 00:21:58.825 #define 
SPDK_CONFIG_OPENSSL_PATH 00:21:58.825 #undef SPDK_CONFIG_PGO_CAPTURE 00:21:58.825 #define SPDK_CONFIG_PGO_DIR 00:21:58.825 #undef SPDK_CONFIG_PGO_USE 00:21:58.825 #define SPDK_CONFIG_PREFIX /usr/local 00:21:58.825 #undef SPDK_CONFIG_RAID5F 00:21:58.825 #undef SPDK_CONFIG_RBD 00:21:58.825 #define SPDK_CONFIG_RDMA 1 00:21:58.825 #define SPDK_CONFIG_RDMA_PROV verbs 00:21:58.825 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:21:58.825 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:21:58.825 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:21:58.825 #define SPDK_CONFIG_SHARED 1 00:21:58.825 #undef SPDK_CONFIG_SMA 00:21:58.825 #define SPDK_CONFIG_TESTS 1 00:21:58.825 #undef SPDK_CONFIG_TSAN 00:21:58.825 #define SPDK_CONFIG_UBLK 1 00:21:58.825 #define SPDK_CONFIG_UBSAN 1 00:21:58.825 #undef SPDK_CONFIG_UNIT_TESTS 00:21:58.825 #undef SPDK_CONFIG_URING 00:21:58.825 #define SPDK_CONFIG_URING_PATH 00:21:58.825 #undef SPDK_CONFIG_URING_ZNS 00:21:58.825 #undef SPDK_CONFIG_USDT 00:21:58.825 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:21:58.825 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:21:58.825 #define SPDK_CONFIG_VFIO_USER 1 00:21:58.825 #define SPDK_CONFIG_VFIO_USER_DIR 00:21:58.825 #define SPDK_CONFIG_VHOST 1 00:21:58.825 #define SPDK_CONFIG_VIRTIO 1 00:21:58.825 #undef SPDK_CONFIG_VTUNE 00:21:58.825 #define SPDK_CONFIG_VTUNE_DIR 00:21:58.825 #define SPDK_CONFIG_WERROR 1 00:21:58.825 #define SPDK_CONFIG_WPDK_DIR 00:21:58.825 #undef SPDK_CONFIG_XNVME 00:21:58.825 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:21:58.825 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:21:58.826 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:21:58.827 11:30:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2142491 ]] 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2142491 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.fcmwmI 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:21:58.827 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fcmwmI/tests/target /tmp/spdk.fcmwmI 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950431744 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4333998080 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118465675264 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371021312 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10905346048 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680800256 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685510656 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864507392 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874206720 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=372736 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:21:58.828 11:30:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=131072 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683999232 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685510656 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1511424 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:21:58.828 * Looking for test storage... 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118465675264 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13119938560 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.828 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.090 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.091 11:30:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:22:05.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.678 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.940 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.940 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.940 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.940 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.940 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.940 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.940 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:22:05.940 00:22:05.940 --- 10.0.0.2 ping statistics --- 00:22:05.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.941 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:22:05.941 00:22:05.941 --- 10.0.0.1 ping statistics --- 00:22:05.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.941 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:05.941 11:30:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:06.202 ************************************ 00:22:06.202 START TEST nvmf_filesystem_no_in_capsule 00:22:06.202 ************************************ 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2146133 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2146133 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2146133 ']' 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:06.202 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.203 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:06.203 11:30:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.203 [2024-06-10 11:30:34.968112] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:22:06.203 [2024-06-10 11:30:34.968164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.203 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.203 [2024-06-10 11:30:35.034399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.203 [2024-06-10 11:30:35.102306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.203 [2024-06-10 11:30:35.102341] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.203 [2024-06-10 11:30:35.102348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.203 [2024-06-10 11:30:35.102354] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.203 [2024-06-10 11:30:35.102360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.203 [2024-06-10 11:30:35.102401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.203 [2024-06-10 11:30:35.102485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.203 [2024-06-10 11:30:35.102634] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.203 [2024-06-10 11:30:35.102635] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.465 [2024-06-10 11:30:35.248517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.465 11:30:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.465 Malloc1 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:06.465 [2024-06-10 11:30:35.359877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
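For reference, the target-side bring-up traced above condenses to the following shell sketch. Every command and flag is copied from the log entries; rpc_cmd is assumed to be a thin wrapper around scripts/rpc.py, and the relative paths are illustrative rather than authoritative:
  # start the target inside the test network namespace (pid 2146133 in this run)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # in-capsule data size 0 for this variant
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB backing bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420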
00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.465 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:22:06.465 { 00:22:06.465 "name": "Malloc1", 00:22:06.465 "aliases": [ 00:22:06.465 "95470ef2-d381-4a54-8289-56615d79da6c" 00:22:06.465 ], 00:22:06.465 "product_name": "Malloc disk", 00:22:06.465 "block_size": 512, 00:22:06.465 "num_blocks": 1048576, 00:22:06.465 "uuid": "95470ef2-d381-4a54-8289-56615d79da6c", 00:22:06.465 "assigned_rate_limits": { 00:22:06.465 "rw_ios_per_sec": 0, 00:22:06.465 "rw_mbytes_per_sec": 0, 00:22:06.465 "r_mbytes_per_sec": 0, 00:22:06.465 "w_mbytes_per_sec": 0 00:22:06.465 }, 00:22:06.465 "claimed": true, 00:22:06.465 "claim_type": "exclusive_write", 00:22:06.465 "zoned": false, 00:22:06.465 "supported_io_types": { 00:22:06.465 "read": true, 00:22:06.465 "write": true, 00:22:06.465 "unmap": true, 00:22:06.465 "write_zeroes": true, 00:22:06.465 "flush": true, 00:22:06.465 "reset": true, 00:22:06.465 "compare": false, 00:22:06.465 "compare_and_write": false, 00:22:06.465 "abort": true, 00:22:06.465 "nvme_admin": false, 00:22:06.465 "nvme_io": false 00:22:06.465 }, 00:22:06.465 "memory_domains": [ 00:22:06.465 { 00:22:06.465 "dma_device_id": "system", 00:22:06.465 "dma_device_type": 1 00:22:06.465 }, 00:22:06.465 { 00:22:06.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.465 "dma_device_type": 2 00:22:06.466 } 00:22:06.466 ], 00:22:06.466 "driver_specific": {} 00:22:06.466 } 00:22:06.466 ]' 00:22:06.466 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:22:06.466 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:22:06.727 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:22:06.727 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:22:06.727 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:22:06.727 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:22:06.727 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:22:06.727 11:30:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:08.114 11:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:22:08.114 11:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:22:08.114 11:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.114 11:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:08.114 11:30:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:22:10.664 11:30:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:22:10.664 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:22:11.236 11:30:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:12.181 ************************************ 00:22:12.181 START TEST filesystem_ext4 00:22:12.181 ************************************ 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:22:12.181 11:30:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:22:12.181 11:30:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:22:12.181 11:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:22:12.181 11:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:22:12.181 11:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:22:12.181 11:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:22:12.181 11:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:22:12.181 11:30:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:22:12.181 mke2fs 1.46.5 (30-Dec-2021) 00:22:12.181 Discarding device blocks: 0/522240 done 00:22:12.181 Creating filesystem with 522240 1k blocks and 130560 inodes 00:22:12.181 Filesystem UUID: f0754113-11f4-4624-af93-4786cf9ae343 00:22:12.181 Superblock backups stored on blocks: 00:22:12.181 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:22:12.181 00:22:12.181 Allocating group tables: 0/64 done 00:22:12.181 Writing inode tables: 0/64 done 00:22:12.442 Creating journal (8192 blocks): done 00:22:13.384 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:22:13.384 00:22:13.384 11:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:22:13.384 11:30:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2146133 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:14.329 11:30:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:14.329 00:22:14.329 real 0m2.169s 00:22:14.329 user 0m0.029s 00:22:14.329 sys 0m0.046s 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:22:14.329 ************************************ 00:22:14.329 END TEST filesystem_ext4 00:22:14.329 ************************************ 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:14.329 ************************************ 00:22:14.329 START TEST filesystem_btrfs 00:22:14.329 ************************************ 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:22:14.329 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:22:14.901 btrfs-progs v6.6.2 00:22:14.901 See https://btrfs.readthedocs.io for more information. 00:22:14.901 00:22:14.901 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:22:14.901 NOTE: several default settings have changed in version 5.15, please make sure 00:22:14.901 this does not affect your deployments: 00:22:14.901 - DUP for metadata (-m dup) 00:22:14.901 - enabled no-holes (-O no-holes) 00:22:14.901 - enabled free-space-tree (-R free-space-tree) 00:22:14.901 00:22:14.901 Label: (null) 00:22:14.901 UUID: 4a542cb0-acf6-4e12-b494-61f62b736e14 00:22:14.901 Node size: 16384 00:22:14.901 Sector size: 4096 00:22:14.901 Filesystem size: 510.00MiB 00:22:14.901 Block group profiles: 00:22:14.901 Data: single 8.00MiB 00:22:14.901 Metadata: DUP 32.00MiB 00:22:14.901 System: DUP 8.00MiB 00:22:14.901 SSD detected: yes 00:22:14.901 Zoned device: no 00:22:14.901 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:22:14.901 Runtime features: free-space-tree 00:22:14.901 Checksum: crc32c 00:22:14.901 Number of devices: 1 00:22:14.901 Devices: 00:22:14.901 ID SIZE PATH 00:22:14.901 1 510.00MiB /dev/nvme0n1p1 00:22:14.901 00:22:14.901 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:22:14.901 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2146133 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:15.164 00:22:15.164 real 0m0.686s 00:22:15.164 user 0m0.028s 00:22:15.164 sys 0m0.060s 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:22:15.164 ************************************ 00:22:15.164 END TEST filesystem_btrfs 00:22:15.164 ************************************ 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:22:15.164 11:30:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:15.164 11:30:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:15.164 ************************************ 00:22:15.164 START TEST filesystem_xfs 00:22:15.164 ************************************ 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:22:15.164 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:22:15.164 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:22:15.164 = sectsz=512 attr=2, projid32bit=1 00:22:15.164 = crc=1 finobt=1, sparse=1, rmapbt=0 00:22:15.164 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:22:15.164 data = bsize=4096 blocks=130560, imaxpct=25 00:22:15.164 = sunit=0 swidth=0 blks 00:22:15.164 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:22:15.164 log =internal log bsize=4096 blocks=16384, version=2 00:22:15.164 = sectsz=512 sunit=0 blks, lazy-count=1 00:22:15.164 realtime =none extsz=4096 blocks=0, rtextents=0 00:22:16.108 Discarding blocks...Done. 
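Each filesystem_* subtest above repeats the same host-side pattern against the exported namespace; a minimal sketch, with the device name, mount point and NQN taken from the log (harness helpers such as waitforserial are omitted):
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # single test partition
  partprobe
  mkfs.xfs -f /dev/nvme0n1p1        # the ext4 and btrfs variants use mkfs.ext4 -F / mkfs.btrfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync     # exercise writes over NVMe/TCP
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 2146133                                # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1        # partition still visible after the I/O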
00:22:16.108 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:22:16.108 11:30:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2146133 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:18.657 00:22:18.657 real 0m3.488s 00:22:18.657 user 0m0.021s 00:22:18.657 sys 0m0.058s 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:22:18.657 ************************************ 00:22:18.657 END TEST filesystem_xfs 00:22:18.657 ************************************ 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:22:18.657 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:18.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:18.935 
11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2146133 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2146133 ']' 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2146133 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2146133 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2146133' 00:22:18.935 killing process with pid 2146133 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 2146133 00:22:18.935 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 2146133 00:22:19.288 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:22:19.288 00:22:19.288 real 0m13.067s 00:22:19.288 user 0m51.478s 00:22:19.288 sys 0m1.034s 00:22:19.288 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:19.288 11:30:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.288 ************************************ 00:22:19.288 END TEST nvmf_filesystem_no_in_capsule 00:22:19.288 ************************************ 00:22:19.288 11:30:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:22:19.288 11:30:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:19.288 11:30:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:19.288 11:30:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:19.288 
************************************ 00:22:19.288 START TEST nvmf_filesystem_in_capsule 00:22:19.288 ************************************ 00:22:19.288 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:22:19.288 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:22:19.288 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2149015 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2149015 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2149015 ']' 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:19.289 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.289 [2024-06-10 11:30:48.126832] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:22:19.289 [2024-06-10 11:30:48.126877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.289 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.289 [2024-06-10 11:30:48.190487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.289 [2024-06-10 11:30:48.256436] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.289 [2024-06-10 11:30:48.256472] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.289 [2024-06-10 11:30:48.256480] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.289 [2024-06-10 11:30:48.256486] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.289 [2024-06-10 11:30:48.256492] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
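Teardown for the zero in-capsule variant, as traced in the preceding entries, reduces to the sketch below; the same sequence should follow the 4096-byte in-capsule run that starts here. Values are taken from the log and, as before, rpc_cmd is assumed to map onto scripts/rpc.py:
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # host side
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2146133 && wait 2146133                      # stop this test's nvmf_tgt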
00:22:19.289 [2024-06-10 11:30:48.256613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.289 [2024-06-10 11:30:48.256746] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.289 [2024-06-10 11:30:48.256844] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.289 [2024-06-10 11:30:48.256845] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.550 [2024-06-10 11:30:48.396570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.550 Malloc1 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.550 11:30:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.550 [2024-06-10 11:30:48.510915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:22:19.550 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:22:19.812 { 00:22:19.812 "name": "Malloc1", 00:22:19.812 "aliases": [ 00:22:19.812 "ac3cfb56-51b5-423e-8b54-5ddd64b0dd8e" 00:22:19.812 ], 00:22:19.812 "product_name": "Malloc disk", 00:22:19.812 "block_size": 512, 00:22:19.812 "num_blocks": 1048576, 00:22:19.812 "uuid": "ac3cfb56-51b5-423e-8b54-5ddd64b0dd8e", 00:22:19.812 "assigned_rate_limits": { 00:22:19.812 "rw_ios_per_sec": 0, 00:22:19.812 "rw_mbytes_per_sec": 0, 00:22:19.812 "r_mbytes_per_sec": 0, 00:22:19.812 "w_mbytes_per_sec": 0 00:22:19.812 }, 00:22:19.812 "claimed": true, 00:22:19.812 "claim_type": "exclusive_write", 00:22:19.812 "zoned": false, 00:22:19.812 "supported_io_types": { 00:22:19.812 "read": true, 00:22:19.812 "write": true, 00:22:19.812 "unmap": true, 00:22:19.812 "write_zeroes": true, 00:22:19.812 "flush": true, 00:22:19.812 "reset": true, 00:22:19.812 "compare": false, 00:22:19.812 "compare_and_write": false, 00:22:19.812 "abort": true, 00:22:19.812 "nvme_admin": false, 00:22:19.812 "nvme_io": false 00:22:19.812 }, 00:22:19.812 "memory_domains": [ 00:22:19.812 { 00:22:19.812 "dma_device_id": "system", 00:22:19.812 "dma_device_type": 1 00:22:19.812 }, 00:22:19.812 { 00:22:19.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.812 "dma_device_type": 2 00:22:19.812 } 00:22:19.812 ], 00:22:19.812 "driver_specific": {} 00:22:19.812 } 00:22:19.812 ]' 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:22:19.812 11:30:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:21.197 11:30:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:22:21.197 11:30:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:22:21.197 11:30:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.197 11:30:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:21.197 11:30:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:22:23.742 11:30:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:22:24.313 11:30:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:25.255 ************************************ 00:22:25.255 START TEST filesystem_in_capsule_ext4 00:22:25.255 ************************************ 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:22:25.255 11:30:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:22:25.255 mke2fs 1.46.5 (30-Dec-2021) 00:22:25.255 Discarding device blocks: 0/522240 done 00:22:25.255 Creating filesystem with 522240 1k blocks and 130560 inodes 00:22:25.255 Filesystem UUID: 2bde55d5-f8a6-42d6-b6b3-4a75a3142a1c 00:22:25.255 Superblock backups stored on blocks: 00:22:25.255 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:22:25.255 00:22:25.255 Allocating group tables: 0/64 done 00:22:25.255 Writing inode tables: 0/64 done 00:22:25.515 Creating journal (8192 blocks): done 00:22:26.607 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:22:26.607 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2149015 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:26.607 00:22:26.607 real 0m1.494s 00:22:26.607 user 0m0.026s 00:22:26.607 sys 0m0.046s 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:26.607 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.607 ************************************ 00:22:26.607 END TEST filesystem_in_capsule_ext4 00:22:26.607 ************************************ 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:26.868 ************************************ 00:22:26.868 START TEST filesystem_in_capsule_btrfs 00:22:26.868 ************************************ 00:22:26.868 11:30:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:22:26.868 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:22:27.129 btrfs-progs v6.6.2 00:22:27.129 See https://btrfs.readthedocs.io for more information. 00:22:27.129 00:22:27.129 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:22:27.129 NOTE: several default settings have changed in version 5.15, please make sure 00:22:27.129 this does not affect your deployments: 00:22:27.129 - DUP for metadata (-m dup) 00:22:27.129 - enabled no-holes (-O no-holes) 00:22:27.129 - enabled free-space-tree (-R free-space-tree) 00:22:27.129 00:22:27.129 Label: (null) 00:22:27.129 UUID: 34bb1b38-6ab4-43e4-809f-69e70b89530f 00:22:27.129 Node size: 16384 00:22:27.129 Sector size: 4096 00:22:27.129 Filesystem size: 510.00MiB 00:22:27.129 Block group profiles: 00:22:27.129 Data: single 8.00MiB 00:22:27.129 Metadata: DUP 32.00MiB 00:22:27.129 System: DUP 8.00MiB 00:22:27.129 SSD detected: yes 00:22:27.129 Zoned device: no 00:22:27.129 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:22:27.129 Runtime features: free-space-tree 00:22:27.129 Checksum: crc32c 00:22:27.129 Number of devices: 1 00:22:27.129 Devices: 00:22:27.129 ID SIZE PATH 00:22:27.129 1 510.00MiB /dev/nvme0n1p1 00:22:27.129 00:22:27.129 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:22:27.129 11:30:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:27.701 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:27.701 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:22:27.701 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:27.701 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:22:27.701 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:22:27.701 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2149015 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:27.702 00:22:27.702 real 0m0.987s 00:22:27.702 user 0m0.021s 00:22:27.702 sys 0m0.068s 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:22:27.702 ************************************ 00:22:27.702 END TEST filesystem_in_capsule_btrfs 00:22:27.702 ************************************ 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:27.702 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:27.962 ************************************ 00:22:27.962 START TEST filesystem_in_capsule_xfs 00:22:27.962 ************************************ 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:22:27.962 11:30:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:22:27.962 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:22:27.962 = sectsz=512 attr=2, projid32bit=1 00:22:27.962 = crc=1 finobt=1, sparse=1, rmapbt=0 00:22:27.962 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:22:27.962 data = bsize=4096 blocks=130560, imaxpct=25 00:22:27.962 = sunit=0 swidth=0 blks 00:22:27.962 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:22:27.962 log =internal log bsize=4096 blocks=16384, version=2 00:22:27.962 = sectsz=512 sunit=0 blks, lazy-count=1 00:22:27.962 realtime =none extsz=4096 blocks=0, rtextents=0 00:22:28.904 Discarding blocks...Done. 
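For reference, the filesystem_in_capsule_ext4 and _btrfs runs above, and the _xfs run that continues below, all drive the same make-filesystem/mount/write/unmount sequence from target/filesystem.sh against the exported NVMe-oF namespace. A minimal sketch of one iteration, assuming the /dev/nvme0n1p1 partition and /mnt/device mount point shown in the trace (the real helpers add liveness checks and cleanup on failure):

    # sketch of one filesystem_in_capsule_<fstype> pass; paths and flags taken from the trace
    fstype=$1                          # ext4 | btrfs | xfs
    dev=/dev/nvme0n1p1                 # GPT partition created earlier with parted
    force=-f                           # mkfs.btrfs / mkfs.xfs use -f ...
    [ "$fstype" = ext4 ] && force=-F   # ... mkfs.ext4 uses -F
    mkfs."$fstype" "$force" "$dev"     # build the filesystem on the fabric-attached device
    mount "$dev" /mnt/device           # mount it locally
    touch /mnt/device/aaa; sync        # small write, flushed over the fabric
    rm /mnt/device/aaa; sync           # delete it again
    umount /mnt/device                 # detach before the next filesystem type runs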
00:22:28.904 11:30:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:22:28.904 11:30:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2149015 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:22:30.816 00:22:30.816 real 0m3.023s 00:22:30.816 user 0m0.021s 00:22:30.816 sys 0m0.057s 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:22:30.816 ************************************ 00:22:30.816 END TEST filesystem_in_capsule_xfs 00:22:30.816 ************************************ 00:22:30.816 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:22:31.077 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:22:31.077 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:31.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:31.077 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:31.077 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:22:31.077 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:22:31.077 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:31.077 11:30:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:22:31.077 11:30:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2149015 00:22:31.077 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2149015 ']' 00:22:31.078 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2149015 00:22:31.078 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:22:31.078 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:31.078 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2149015 00:22:31.338 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:31.339 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:31.339 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2149015' 00:22:31.339 killing process with pid 2149015 00:22:31.339 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 2149015 00:22:31.339 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 2149015 00:22:31.339 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:22:31.339 00:22:31.339 real 0m12.242s 00:22:31.339 user 0m48.138s 00:22:31.339 sys 0m1.060s 00:22:31.339 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:31.339 11:31:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:22:31.599 ************************************ 00:22:31.599 END TEST nvmf_filesystem_in_capsule 00:22:31.599 ************************************ 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.599 rmmod nvme_tcp 00:22:31.599 rmmod nvme_fabrics 00:22:31.599 rmmod nvme_keyring 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.599 11:31:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.512 11:31:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:33.512 00:22:33.512 real 0m34.955s 00:22:33.512 user 1m41.702s 00:22:33.512 sys 0m7.584s 00:22:33.773 11:31:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:33.773 11:31:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:22:33.773 ************************************ 00:22:33.773 END TEST nvmf_filesystem 00:22:33.773 ************************************ 00:22:33.773 11:31:02 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:22:33.773 11:31:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:33.773 11:31:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:33.773 11:31:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.773 ************************************ 00:22:33.773 START TEST nvmf_target_discovery 00:22:33.773 ************************************ 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:22:33.773 * Looking for test storage... 
00:22:33.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.773 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.774 11:31:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.365 11:31:09 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.365 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.365 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.366 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.366 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.366 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.366 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.626 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.626 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.626 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:22:40.626 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.626 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.626 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.887 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:22:40.887 00:22:40.887 --- 10.0.0.2 ping statistics --- 00:22:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.887 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:22:40.887 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:22:40.887 00:22:40.887 --- 10.0.0.1 ping statistics --- 00:22:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.888 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2155755 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2155755 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 2155755 ']' 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:22:40.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:40.888 11:31:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.888 [2024-06-10 11:31:09.716447] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:22:40.888 [2024-06-10 11:31:09.716512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.888 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.888 [2024-06-10 11:31:09.787437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.149 [2024-06-10 11:31:09.863236] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.149 [2024-06-10 11:31:09.863275] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.149 [2024-06-10 11:31:09.863282] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.149 [2024-06-10 11:31:09.863289] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.149 [2024-06-10 11:31:09.863294] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.149 [2024-06-10 11:31:09.863401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.149 [2024-06-10 11:31:09.863537] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.149 [2024-06-10 11:31:09.863714] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.149 [2024-06-10 11:31:09.863714] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.719 [2024-06-10 11:31:10.638503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:22:41.719 11:31:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.719 Null1 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.719 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 [2024-06-10 11:31:10.694813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 Null2 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:41.980 11:31:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 Null3 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 Null4 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.980 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:22:42.241 00:22:42.241 Discovery Log Number of Records 6, Generation counter 6 00:22:42.241 =====Discovery Log Entry 0====== 00:22:42.241 trtype: tcp 00:22:42.241 adrfam: ipv4 00:22:42.241 subtype: current discovery subsystem 00:22:42.241 treq: not required 00:22:42.241 portid: 0 00:22:42.241 trsvcid: 4420 00:22:42.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:42.241 traddr: 10.0.0.2 00:22:42.241 eflags: explicit discovery connections, duplicate discovery information 00:22:42.241 sectype: none 00:22:42.241 =====Discovery Log Entry 1====== 00:22:42.241 trtype: tcp 00:22:42.241 adrfam: ipv4 00:22:42.241 subtype: nvme subsystem 00:22:42.241 treq: not required 00:22:42.241 portid: 0 00:22:42.241 trsvcid: 4420 00:22:42.241 subnqn: nqn.2016-06.io.spdk:cnode1 00:22:42.241 traddr: 10.0.0.2 00:22:42.241 eflags: none 00:22:42.241 sectype: none 00:22:42.241 =====Discovery Log Entry 2====== 00:22:42.241 trtype: tcp 00:22:42.241 adrfam: ipv4 00:22:42.241 subtype: nvme subsystem 00:22:42.241 treq: not required 00:22:42.241 portid: 0 00:22:42.241 trsvcid: 4420 00:22:42.241 subnqn: nqn.2016-06.io.spdk:cnode2 00:22:42.241 traddr: 10.0.0.2 00:22:42.241 eflags: none 00:22:42.241 sectype: none 00:22:42.241 =====Discovery Log Entry 3====== 00:22:42.241 trtype: tcp 00:22:42.241 adrfam: ipv4 00:22:42.241 subtype: nvme subsystem 00:22:42.241 treq: not required 00:22:42.241 portid: 0 00:22:42.241 trsvcid: 4420 00:22:42.241 subnqn: nqn.2016-06.io.spdk:cnode3 00:22:42.241 traddr: 10.0.0.2 00:22:42.241 eflags: none 00:22:42.241 sectype: none 00:22:42.241 =====Discovery Log Entry 4====== 00:22:42.241 trtype: tcp 00:22:42.241 adrfam: ipv4 00:22:42.241 subtype: nvme subsystem 00:22:42.241 treq: not required 
00:22:42.241 portid: 0 00:22:42.241 trsvcid: 4420 00:22:42.241 subnqn: nqn.2016-06.io.spdk:cnode4 00:22:42.241 traddr: 10.0.0.2 00:22:42.241 eflags: none 00:22:42.241 sectype: none 00:22:42.241 =====Discovery Log Entry 5====== 00:22:42.241 trtype: tcp 00:22:42.241 adrfam: ipv4 00:22:42.241 subtype: discovery subsystem referral 00:22:42.241 treq: not required 00:22:42.241 portid: 0 00:22:42.241 trsvcid: 4430 00:22:42.241 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:42.241 traddr: 10.0.0.2 00:22:42.241 eflags: none 00:22:42.241 sectype: none 00:22:42.241 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:22:42.242 Perform nvmf subsystem discovery via RPC 00:22:42.242 11:31:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:22:42.242 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 [ 00:22:42.242 { 00:22:42.242 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:42.242 "subtype": "Discovery", 00:22:42.242 "listen_addresses": [ 00:22:42.242 { 00:22:42.242 "trtype": "TCP", 00:22:42.242 "adrfam": "IPv4", 00:22:42.242 "traddr": "10.0.0.2", 00:22:42.242 "trsvcid": "4420" 00:22:42.242 } 00:22:42.242 ], 00:22:42.242 "allow_any_host": true, 00:22:42.242 "hosts": [] 00:22:42.242 }, 00:22:42.242 { 00:22:42.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.242 "subtype": "NVMe", 00:22:42.242 "listen_addresses": [ 00:22:42.242 { 00:22:42.242 "trtype": "TCP", 00:22:42.242 "adrfam": "IPv4", 00:22:42.242 "traddr": "10.0.0.2", 00:22:42.242 "trsvcid": "4420" 00:22:42.242 } 00:22:42.242 ], 00:22:42.242 "allow_any_host": true, 00:22:42.242 "hosts": [], 00:22:42.242 "serial_number": "SPDK00000000000001", 00:22:42.242 "model_number": "SPDK bdev Controller", 00:22:42.242 "max_namespaces": 32, 00:22:42.242 "min_cntlid": 1, 00:22:42.242 "max_cntlid": 65519, 00:22:42.242 "namespaces": [ 00:22:42.242 { 00:22:42.242 "nsid": 1, 00:22:42.242 "bdev_name": "Null1", 00:22:42.242 "name": "Null1", 00:22:42.242 "nguid": "FBB80CACD3F9457E8D67F52E1C33DCAB", 00:22:42.242 "uuid": "fbb80cac-d3f9-457e-8d67-f52e1c33dcab" 00:22:42.242 } 00:22:42.242 ] 00:22:42.242 }, 00:22:42.242 { 00:22:42.242 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:22:42.242 "subtype": "NVMe", 00:22:42.242 "listen_addresses": [ 00:22:42.242 { 00:22:42.242 "trtype": "TCP", 00:22:42.242 "adrfam": "IPv4", 00:22:42.242 "traddr": "10.0.0.2", 00:22:42.242 "trsvcid": "4420" 00:22:42.242 } 00:22:42.242 ], 00:22:42.242 "allow_any_host": true, 00:22:42.242 "hosts": [], 00:22:42.242 "serial_number": "SPDK00000000000002", 00:22:42.242 "model_number": "SPDK bdev Controller", 00:22:42.242 "max_namespaces": 32, 00:22:42.242 "min_cntlid": 1, 00:22:42.242 "max_cntlid": 65519, 00:22:42.242 "namespaces": [ 00:22:42.242 { 00:22:42.242 "nsid": 1, 00:22:42.242 "bdev_name": "Null2", 00:22:42.242 "name": "Null2", 00:22:42.242 "nguid": "4A0990EDD45D4134AD2FB92D3791495D", 00:22:42.242 "uuid": "4a0990ed-d45d-4134-ad2f-b92d3791495d" 00:22:42.242 } 00:22:42.242 ] 00:22:42.242 }, 00:22:42.242 { 00:22:42.242 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:22:42.242 "subtype": "NVMe", 00:22:42.242 "listen_addresses": [ 00:22:42.242 { 00:22:42.242 "trtype": "TCP", 00:22:42.242 "adrfam": "IPv4", 00:22:42.242 "traddr": "10.0.0.2", 00:22:42.242 "trsvcid": "4420" 00:22:42.242 } 00:22:42.242 ], 00:22:42.242 "allow_any_host": true, 
00:22:42.242 "hosts": [], 00:22:42.242 "serial_number": "SPDK00000000000003", 00:22:42.242 "model_number": "SPDK bdev Controller", 00:22:42.242 "max_namespaces": 32, 00:22:42.242 "min_cntlid": 1, 00:22:42.242 "max_cntlid": 65519, 00:22:42.242 "namespaces": [ 00:22:42.242 { 00:22:42.242 "nsid": 1, 00:22:42.242 "bdev_name": "Null3", 00:22:42.242 "name": "Null3", 00:22:42.242 "nguid": "6B1D80E942A8436DB459115454381FD2", 00:22:42.242 "uuid": "6b1d80e9-42a8-436d-b459-115454381fd2" 00:22:42.242 } 00:22:42.242 ] 00:22:42.242 }, 00:22:42.242 { 00:22:42.242 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:22:42.242 "subtype": "NVMe", 00:22:42.242 "listen_addresses": [ 00:22:42.242 { 00:22:42.242 "trtype": "TCP", 00:22:42.242 "adrfam": "IPv4", 00:22:42.242 "traddr": "10.0.0.2", 00:22:42.242 "trsvcid": "4420" 00:22:42.242 } 00:22:42.242 ], 00:22:42.242 "allow_any_host": true, 00:22:42.242 "hosts": [], 00:22:42.242 "serial_number": "SPDK00000000000004", 00:22:42.242 "model_number": "SPDK bdev Controller", 00:22:42.242 "max_namespaces": 32, 00:22:42.242 "min_cntlid": 1, 00:22:42.242 "max_cntlid": 65519, 00:22:42.242 "namespaces": [ 00:22:42.242 { 00:22:42.242 "nsid": 1, 00:22:42.242 "bdev_name": "Null4", 00:22:42.242 "name": "Null4", 00:22:42.242 "nguid": "755E74DD4FEE4E6493CB637088A85BCA", 00:22:42.242 "uuid": "755e74dd-4fee-4e64-93cb-637088a85bca" 00:22:42.242 } 00:22:42.242 ] 00:22:42.242 } 00:22:42.242 ] 00:22:42.242 11:31:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.242 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.242 rmmod nvme_tcp 00:22:42.242 rmmod nvme_fabrics 00:22:42.242 rmmod nvme_keyring 00:22:42.243 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.243 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:22:42.243 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:22:42.243 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2155755 ']' 00:22:42.243 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2155755 00:22:42.243 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 2155755 ']' 00:22:42.243 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 2155755 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2155755 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2155755' 00:22:42.503 killing process with pid 2155755 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 2155755 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 2155755 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.503 11:31:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.049 11:31:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.049 00:22:45.049 real 0m10.910s 00:22:45.049 user 0m8.365s 00:22:45.049 sys 0m5.511s 00:22:45.049 11:31:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:45.049 11:31:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.049 ************************************ 00:22:45.049 END TEST nvmf_target_discovery 00:22:45.049 ************************************ 00:22:45.049 11:31:13 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:22:45.049 11:31:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:45.049 11:31:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:45.049 11:31:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.049 ************************************ 00:22:45.049 START TEST nvmf_referrals 00:22:45.049 ************************************ 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:22:45.049 * Looking for test storage... 00:22:45.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
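referrals.sh exercises the target's discovery-referral RPCs against the three loopback addresses defined above (127.0.0.2, 127.0.0.3, 127.0.0.4) on the 4430 referral port used throughout these tests. A rough sketch of the two operations the test keeps alternating between, assuming the same rpc.py defaults as before, is:

    # advertise another discovery service to hosts that query this target
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    # count the referrals the target currently reports
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length

The log below runs these through the rpc_cmd wrapper and then cross-checks each change with nvme discover against the discovery listener.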
00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.049 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.050 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.050 11:31:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.050 11:31:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.050 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.050 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.050 11:31:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.050 11:31:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.707 11:31:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:51.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:51.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.707 11:31:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:51.707 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:51.707 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.707 11:31:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:51.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:22:51.707 00:22:51.707 --- 10.0.0.2 ping statistics --- 00:22:51.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.707 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:22:51.707 00:22:51.707 --- 10.0.0.1 ping statistics --- 00:22:51.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.707 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:22:51.707 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2160279 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2160279 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 2160279 ']' 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
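The interface setup above splits the two ice ports across a network namespace boundary: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), and the ping in each direction verifies the path before the target starts. Collected into one place, the commands from this run amount to roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

This is also why nvmf_tgt itself is launched just below under ip netns exec cvl_0_0_ns_spdk: the target must listen on 10.0.0.2 inside that namespace.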
00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:51.708 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.968 [2024-06-10 11:31:20.679849] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:22:51.968 [2024-06-10 11:31:20.679900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.968 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.968 [2024-06-10 11:31:20.744559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.968 [2024-06-10 11:31:20.811385] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.968 [2024-06-10 11:31:20.811420] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.968 [2024-06-10 11:31:20.811428] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.968 [2024-06-10 11:31:20.811435] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.968 [2024-06-10 11:31:20.811441] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.968 [2024-06-10 11:31:20.811543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.968 [2024-06-10 11:31:20.811684] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.968 [2024-06-10 11:31:20.811842] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.968 [2024-06-10 11:31:20.811843] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.968 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:51.968 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:22:51.968 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.968 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:51.968 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 [2024-06-10 11:31:20.957504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 [2024-06-10 11:31:20.973724] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.229 11:31:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:52.229 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:52.490 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:22:52.751 11:31:21 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:52.751 11:31:21 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:22:53.012 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:53.274 11:31:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:22:53.274 11:31:22 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:53.274 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:22:53.534 11:31:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.535 rmmod nvme_tcp 00:22:53.535 rmmod nvme_fabrics 00:22:53.535 rmmod nvme_keyring 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2160279 ']' 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2160279 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 2160279 ']' 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 2160279 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2160279 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2160279' 00:22:53.535 killing process with pid 2160279 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 2160279 00:22:53.535 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 2160279 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.795 11:31:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.341 11:31:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.341 
00:22:56.341 real 0m11.135s 00:22:56.341 user 0m9.942s 00:22:56.341 sys 0m5.679s 00:22:56.341 11:31:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:56.341 11:31:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:22:56.341 ************************************ 00:22:56.341 END TEST nvmf_referrals 00:22:56.341 ************************************ 00:22:56.341 11:31:24 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:22:56.341 11:31:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:56.341 11:31:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:56.341 11:31:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.341 ************************************ 00:22:56.341 START TEST nvmf_connect_disconnect 00:22:56.341 ************************************ 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:22:56.341 * Looking for test storage... 00:22:56.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
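With the referrals test finished (roughly 11 seconds of wall time here), nvmf.sh moves on to nvmf_connect_disconnect, which as its name suggests drives repeated host connect/disconnect cycles against a TCP subsystem backed by a malloc bdev. One such cycle, sketched with plain nvme-cli and an illustrative subsystem NQN (the script below creates the actual subsystem and listener itself), looks like:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The setup that follows repeats the same NIC discovery and namespace plumbing already seen for the previous two tests.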
00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.341 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.342 11:31:24 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.342 11:31:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:02.934 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:02.934 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.934 11:31:31 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:02.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:02.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.934 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.195 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.195 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.195 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.195 11:31:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:23:03.196 00:23:03.196 --- 10.0.0.2 ping statistics --- 00:23:03.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.196 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:23:03.196 00:23:03.196 --- 10.0.0.1 ping statistics --- 00:23:03.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.196 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2164742 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2164742 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 2164742 ']' 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:03.196 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.457 [2024-06-10 11:31:32.178133] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
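The nvmf_tcp_init sequence traced above pairs the two E810 ports back to back: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the target application is then launched inside that namespace. Condensed, the same plumbing looks like this (device names, addresses and flags are copied from the trace; nothing beyond that is implied):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic reach the initiator port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # sanity-check connectivity in both directions
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF         # the target runs inside the namespace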
00:23:03.457 [2024-06-10 11:31:32.178198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.457 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.457 [2024-06-10 11:31:32.247856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.457 [2024-06-10 11:31:32.321565] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.457 [2024-06-10 11:31:32.321603] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.457 [2024-06-10 11:31:32.321611] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.457 [2024-06-10 11:31:32.321621] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.457 [2024-06-10 11:31:32.321627] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.457 [2024-06-10 11:31:32.321748] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.457 [2024-06-10 11:31:32.321900] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.457 [2024-06-10 11:31:32.321901] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.457 [2024-06-10 11:31:32.321778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.457 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:03.457 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:23:03.457 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.457 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:03.457 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.718 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.719 [2024-06-10 11:31:32.468540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:03.719 11:31:32 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:03.719 [2024-06-10 11:31:32.527960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:23:03.719 11:31:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:23:07.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:11.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:14.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:18.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:22.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.044 rmmod nvme_tcp 00:23:22.044 rmmod nvme_fabrics 00:23:22.044 rmmod nvme_keyring 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2164742 ']' 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2164742 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 2164742 ']' 00:23:22.044 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 2164742 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2164742 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2164742' 00:23:22.045 killing process with pid 2164742 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 2164742 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 2164742 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.045 11:31:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.076 11:31:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.076 00:23:24.076 real 0m28.105s 00:23:24.076 user 1m15.525s 00:23:24.076 sys 0m6.455s 00:23:24.076 11:31:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:24.076 11:31:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:24.076 ************************************ 00:23:24.076 END TEST nvmf_connect_disconnect 00:23:24.076 ************************************ 00:23:24.076 11:31:52 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:23:24.076 11:31:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:24.076 11:31:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:24.076 11:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.076 ************************************ 00:23:24.076 START TEST nvmf_multitarget 00:23:24.076 ************************************ 00:23:24.076 11:31:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:23:24.337 * Looking for test storage... 
00:23:24.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
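nvmftestinit for the multitarget test now repeats the NIC discovery that already ran for connect_disconnect: gather_supported_nvmf_pci_devs classifies candidate ports purely by PCI vendor:device ID (0x8086:0x1592/0x159b → E810/ice, 0x8086:0x37d2 → X722, the 0x15b3 entries → Mellanox) and then collects the net devices sitting under each matching PCI function. A standalone sketch of that idea, with the sysfs paths and IDs as in the trace (the loop itself is an illustration, not the script's code):

intel=0x8086
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net device under ${pci##*/}: ${net##*/}"    # e.g. cvl_0_0, cvl_0_1
        done
    fi
done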
00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.337 11:31:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:30.928 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:30.928 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:30.928 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:30.928 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:30.928 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:30.929 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:31.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:31.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:23:31.190 00:23:31.190 --- 10.0.0.2 ping statistics --- 00:23:31.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.190 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:23:31.190 00:23:31.190 --- 10.0.0.1 ping statistics --- 00:23:31.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.190 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:31.190 11:31:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2172756 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2172756 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 2172756 ']' 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:31.190 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:31.190 [2024-06-10 11:32:00.090980] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
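With the target application up inside the namespace, multitarget.sh exercises the multi-target RPCs shown below: it checks that only the default target exists, creates nvmf_tgt_1 and nvmf_tgt_2, re-counts, then deletes both and counts once more. Condensed (the rpc script path is the one in the trace; the count checks are written here as plain test expressions):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # only the default target so far
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]      # default + the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default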
00:23:31.190 [2024-06-10 11:32:00.091051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.190 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.453 [2024-06-10 11:32:00.164586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.453 [2024-06-10 11:32:00.240214] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.453 [2024-06-10 11:32:00.240254] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.453 [2024-06-10 11:32:00.240261] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.453 [2024-06-10 11:32:00.240268] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.453 [2024-06-10 11:32:00.240273] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.453 [2024-06-10 11:32:00.240400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.453 [2024-06-10 11:32:00.240518] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.453 [2024-06-10 11:32:00.240678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.453 [2024-06-10 11:32:00.240682] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.026 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:32.026 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:23:32.026 11:32:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.026 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:32.026 11:32:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:32.289 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.289 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:32.289 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:23:32.289 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:23:32.289 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:23:32.289 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:23:32.289 "nvmf_tgt_1" 00:23:32.289 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:23:32.553 "nvmf_tgt_2" 00:23:32.553 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:23:32.553 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:23:32.553 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:23:32.553 
11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:23:32.814 true 00:23:32.814 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:23:32.814 true 00:23:32.814 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:23:32.814 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.075 rmmod nvme_tcp 00:23:33.075 rmmod nvme_fabrics 00:23:33.075 rmmod nvme_keyring 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2172756 ']' 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2172756 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 2172756 ']' 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 2172756 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2172756 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2172756' 00:23:33.075 killing process with pid 2172756 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 2172756 00:23:33.075 11:32:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 2172756 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.336 11:32:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.251 11:32:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.251 00:23:35.251 real 0m11.213s 00:23:35.251 user 0m10.389s 00:23:35.251 sys 0m5.533s 00:23:35.251 11:32:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:35.251 11:32:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:23:35.251 ************************************ 00:23:35.251 END TEST nvmf_multitarget 00:23:35.251 ************************************ 00:23:35.251 11:32:04 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:23:35.251 11:32:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:35.251 11:32:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:35.251 11:32:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:35.512 ************************************ 00:23:35.512 START TEST nvmf_rpc 00:23:35.512 ************************************ 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:23:35.512 * Looking for test storage... 00:23:35.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.512 11:32:04 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.512 11:32:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.513 
11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.513 11:32:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:43.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:43.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:43.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.663 
11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:43.663 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:43.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:43.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:23:43.663 00:23:43.663 --- 10.0.0.2 ping statistics --- 00:23:43.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.663 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:23:43.663 00:23:43.663 --- 10.0.0.1 ping statistics --- 00:23:43.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.663 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2177216 00:23:43.663 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2177216 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 2177216 ']' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 [2024-06-10 11:32:11.492521] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:23:43.664 [2024-06-10 11:32:11.492572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.664 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.664 [2024-06-10 11:32:11.559535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.664 [2024-06-10 11:32:11.625215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.664 [2024-06-10 11:32:11.625248] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.664 [2024-06-10 11:32:11.625255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.664 [2024-06-10 11:32:11.625262] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.664 [2024-06-10 11:32:11.625269] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.664 [2024-06-10 11:32:11.625371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.664 [2024-06-10 11:32:11.625487] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.664 [2024-06-10 11:32:11.625616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.664 [2024-06-10 11:32:11.625617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:23:43.664 "tick_rate": 2400000000, 00:23:43.664 "poll_groups": [ 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_000", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [] 00:23:43.664 }, 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_001", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [] 00:23:43.664 }, 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_002", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [] 
00:23:43.664 }, 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_003", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [] 00:23:43.664 } 00:23:43.664 ] 00:23:43.664 }' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 [2024-06-10 11:32:11.884906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:23:43.664 "tick_rate": 2400000000, 00:23:43.664 "poll_groups": [ 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_000", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [ 00:23:43.664 { 00:23:43.664 "trtype": "TCP" 00:23:43.664 } 00:23:43.664 ] 00:23:43.664 }, 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_001", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [ 00:23:43.664 { 00:23:43.664 "trtype": "TCP" 00:23:43.664 } 00:23:43.664 ] 00:23:43.664 }, 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_002", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [ 00:23:43.664 { 00:23:43.664 "trtype": "TCP" 00:23:43.664 } 00:23:43.664 ] 00:23:43.664 }, 00:23:43.664 { 00:23:43.664 "name": "nvmf_tgt_poll_group_003", 00:23:43.664 "admin_qpairs": 0, 00:23:43.664 "io_qpairs": 0, 00:23:43.664 "current_admin_qpairs": 0, 00:23:43.664 "current_io_qpairs": 0, 00:23:43.664 "pending_bdev_io": 0, 00:23:43.664 "completed_nvme_io": 0, 00:23:43.664 "transports": [ 00:23:43.664 { 00:23:43.664 "trtype": "TCP" 00:23:43.664 } 00:23:43.664 ] 00:23:43.664 } 00:23:43.664 ] 
00:23:43.664 }' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:23:43.664 11:32:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 Malloc1 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 [2024-06-10 11:32:12.076772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 
--hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:23:43.664 [2024-06-10 11:32:12.103527] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:23:43.664 Failed to write to /dev/nvme-fabrics: Input/output error 00:23:43.664 could not add new controller: failed to write to nvme-fabrics device 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.664 11:32:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:45.050 11:32:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:23:45.050 11:32:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:23:45.050 11:32:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.050 11:32:13 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:45.050 11:32:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:46.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:46.968 [2024-06-10 11:32:15.810039] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:23:46.968 Failed to write to /dev/nvme-fabrics: Input/output error 00:23:46.968 could not add new controller: failed to write to nvme-fabrics device 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.968 11:32:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:48.356 11:32:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:23:48.356 11:32:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:23:48.356 11:32:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:48.356 11:32:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:48.356 11:32:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:50.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:50.904 [2024-06-10 11:32:19.455462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.904 11:32:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:52.289 11:32:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:52.289 11:32:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:23:52.289 11:32:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:52.289 11:32:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:52.289 11:32:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:54.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:54.202 [2024-06-10 11:32:23.164831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.202 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:54.496 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.496 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:54.496 
11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.496 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:54.496 11:32:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.496 11:32:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:55.908 11:32:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:55.908 11:32:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:23:55.908 11:32:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:55.908 11:32:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:55.908 11:32:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:23:57.826 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:57.826 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:57.826 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:23:57.826 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:57.826 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:57.826 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:23:57.826 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:57.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:23:58.087 11:32:26 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:58.087 [2024-06-10 11:32:26.869305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:58.087 11:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.088 11:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:59.473 11:32:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:23:59.473 11:32:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:23:59.473 11:32:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.473 11:32:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:59.473 11:32:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:02.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.021 [2024-06-10 11:32:30.594036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.021 11:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:03.407 11:32:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:03.407 11:32:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:24:03.407 11:32:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
00:24:03.407 11:32:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:03.407 11:32:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:05.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.323 [2024-06-10 11:32:34.229906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.323 11:32:34 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:05.323 11:32:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:07.238 11:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:24:07.238 11:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:24:07.238 11:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.238 11:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:24:07.238 11:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:09.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 [2024-06-10 11:32:37.896064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 [2024-06-10 11:32:37.956176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.152 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 [2024-06-10 11:32:38.020374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 [2024-06-10 11:32:38.076562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.153 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.415 [2024-06-10 11:32:38.136776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:24:09.415 "tick_rate": 2400000000, 00:24:09.415 "poll_groups": [ 00:24:09.415 { 00:24:09.415 "name": "nvmf_tgt_poll_group_000", 00:24:09.415 "admin_qpairs": 0, 00:24:09.415 
"io_qpairs": 224, 00:24:09.415 "current_admin_qpairs": 0, 00:24:09.415 "current_io_qpairs": 0, 00:24:09.415 "pending_bdev_io": 0, 00:24:09.415 "completed_nvme_io": 225, 00:24:09.415 "transports": [ 00:24:09.415 { 00:24:09.415 "trtype": "TCP" 00:24:09.415 } 00:24:09.415 ] 00:24:09.415 }, 00:24:09.415 { 00:24:09.415 "name": "nvmf_tgt_poll_group_001", 00:24:09.415 "admin_qpairs": 1, 00:24:09.415 "io_qpairs": 223, 00:24:09.415 "current_admin_qpairs": 0, 00:24:09.415 "current_io_qpairs": 0, 00:24:09.415 "pending_bdev_io": 0, 00:24:09.415 "completed_nvme_io": 278, 00:24:09.415 "transports": [ 00:24:09.415 { 00:24:09.415 "trtype": "TCP" 00:24:09.415 } 00:24:09.415 ] 00:24:09.415 }, 00:24:09.415 { 00:24:09.415 "name": "nvmf_tgt_poll_group_002", 00:24:09.415 "admin_qpairs": 6, 00:24:09.415 "io_qpairs": 218, 00:24:09.415 "current_admin_qpairs": 0, 00:24:09.415 "current_io_qpairs": 0, 00:24:09.415 "pending_bdev_io": 0, 00:24:09.415 "completed_nvme_io": 512, 00:24:09.415 "transports": [ 00:24:09.415 { 00:24:09.415 "trtype": "TCP" 00:24:09.415 } 00:24:09.415 ] 00:24:09.415 }, 00:24:09.415 { 00:24:09.415 "name": "nvmf_tgt_poll_group_003", 00:24:09.415 "admin_qpairs": 0, 00:24:09.415 "io_qpairs": 224, 00:24:09.415 "current_admin_qpairs": 0, 00:24:09.415 "current_io_qpairs": 0, 00:24:09.415 "pending_bdev_io": 0, 00:24:09.415 "completed_nvme_io": 224, 00:24:09.415 "transports": [ 00:24:09.415 { 00:24:09.415 "trtype": "TCP" 00:24:09.415 } 00:24:09.415 ] 00:24:09.415 } 00:24:09.415 ] 00:24:09.415 }' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.415 rmmod nvme_tcp 00:24:09.415 rmmod nvme_fabrics 00:24:09.415 rmmod nvme_keyring 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:24:09.415 11:32:38 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2177216 ']' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2177216 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 2177216 ']' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 2177216 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:09.415 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2177216 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2177216' 00:24:09.677 killing process with pid 2177216 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 2177216 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 2177216 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.677 11:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.225 11:32:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.225 00:24:12.225 real 0m36.377s 00:24:12.225 user 1m49.477s 00:24:12.225 sys 0m6.809s 00:24:12.225 11:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:12.225 11:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:12.225 ************************************ 00:24:12.225 END TEST nvmf_rpc 00:24:12.225 ************************************ 00:24:12.225 11:32:40 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:24:12.225 11:32:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:12.225 11:32:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:12.225 11:32:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.225 ************************************ 00:24:12.225 START TEST nvmf_invalid 00:24:12.225 ************************************ 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:24:12.225 * Looking for test storage... 
00:24:12.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.225 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.226 11:32:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.818 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:18.819 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:18.819 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:18.819 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:18.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:18.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:24:18.819 00:24:18.819 --- 10.0.0.2 ping statistics --- 00:24:18.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.819 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:18.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:24:18.819 00:24:18.819 --- 10.0.0.1 ping statistics --- 00:24:18.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.819 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2186757 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2186757 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 2186757 ']' 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:18.819 [2024-06-10 11:32:47.501227] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:24:18.819 [2024-06-10 11:32:47.501276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.819 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.819 [2024-06-10 11:32:47.565926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.819 [2024-06-10 11:32:47.633504] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.819 [2024-06-10 11:32:47.633540] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.819 [2024-06-10 11:32:47.633549] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.819 [2024-06-10 11:32:47.633557] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.819 [2024-06-10 11:32:47.633563] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.819 [2024-06-10 11:32:47.633662] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.819 [2024-06-10 11:32:47.633805] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.819 [2024-06-10 11:32:47.634024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.819 [2024-06-10 11:32:47.634025] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:18.819 11:32:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11179 00:24:19.081 [2024-06-10 11:32:47.917911] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:24:19.081 11:32:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:24:19.081 { 00:24:19.081 "nqn": "nqn.2016-06.io.spdk:cnode11179", 00:24:19.081 "tgt_name": "foobar", 00:24:19.081 "method": "nvmf_create_subsystem", 00:24:19.081 "req_id": 1 00:24:19.081 } 00:24:19.081 Got JSON-RPC error response 00:24:19.081 response: 00:24:19.081 { 00:24:19.081 "code": -32603, 00:24:19.081 "message": "Unable to find target foobar" 00:24:19.081 }' 00:24:19.081 11:32:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:24:19.081 { 00:24:19.081 "nqn": "nqn.2016-06.io.spdk:cnode11179", 00:24:19.081 "tgt_name": "foobar", 00:24:19.081 "method": "nvmf_create_subsystem", 00:24:19.081 "req_id": 1 00:24:19.081 } 00:24:19.081 Got JSON-RPC error response 00:24:19.081 response: 00:24:19.081 { 00:24:19.081 "code": -32603, 00:24:19.081 "message": "Unable to find target foobar" 00:24:19.081 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:24:19.081 11:32:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:24:19.081 11:32:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25164 00:24:19.341 [2024-06-10 11:32:48.142642] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25164: invalid serial number 'SPDKISFASTANDAWESOME' 00:24:19.341 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:24:19.341 { 00:24:19.341 "nqn": "nqn.2016-06.io.spdk:cnode25164", 00:24:19.341 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:24:19.341 "method": "nvmf_create_subsystem", 00:24:19.341 "req_id": 1 00:24:19.341 } 00:24:19.341 Got JSON-RPC error response 00:24:19.341 response: 00:24:19.341 { 00:24:19.342 "code": -32602, 00:24:19.342 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:24:19.342 }' 00:24:19.342 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:24:19.342 { 00:24:19.342 "nqn": "nqn.2016-06.io.spdk:cnode25164", 00:24:19.342 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:24:19.342 "method": "nvmf_create_subsystem", 00:24:19.342 "req_id": 1 00:24:19.342 } 00:24:19.342 Got JSON-RPC error response 00:24:19.342 response: 00:24:19.342 { 00:24:19.342 "code": -32602, 00:24:19.342 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:24:19.342 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:24:19.342 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:24:19.342 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6108 00:24:19.603 [2024-06-10 11:32:48.367383] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6108: invalid model number 'SPDK_Controller' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:24:19.603 { 00:24:19.603 "nqn": "nqn.2016-06.io.spdk:cnode6108", 00:24:19.603 "model_number": "SPDK_Controller\u001f", 00:24:19.603 "method": "nvmf_create_subsystem", 00:24:19.603 "req_id": 1 00:24:19.603 } 00:24:19.603 Got JSON-RPC error response 00:24:19.603 response: 00:24:19.603 { 00:24:19.603 "code": -32602, 00:24:19.603 "message": "Invalid MN SPDK_Controller\u001f" 00:24:19.603 }' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:24:19.603 { 00:24:19.603 "nqn": "nqn.2016-06.io.spdk:cnode6108", 00:24:19.603 "model_number": "SPDK_Controller\u001f", 00:24:19.603 "method": "nvmf_create_subsystem", 00:24:19.603 "req_id": 1 00:24:19.603 } 00:24:19.603 Got JSON-RPC error response 00:24:19.603 response: 00:24:19.603 { 00:24:19.603 "code": -32602, 00:24:19.603 "message": "Invalid MN SPDK_Controller\u001f" 00:24:19.603 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:24:19.603 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]] 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'o.Uz!\]S"Q"TF7v&tmN50' 00:24:19.604 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'o.Uz!\]S"Q"TF7v&tmN50' nqn.2016-06.io.spdk:cnode18805 00:24:19.866 [2024-06-10 11:32:48.752694] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18805: invalid serial number 'o.Uz!\]S"Q"TF7v&tmN50' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:24:19.866 { 00:24:19.866 "nqn": "nqn.2016-06.io.spdk:cnode18805", 00:24:19.866 "serial_number": "o.Uz!\\]S\"Q\"TF7v&tmN50", 00:24:19.866 "method": "nvmf_create_subsystem", 00:24:19.866 "req_id": 1 00:24:19.866 } 00:24:19.866 Got JSON-RPC error response 00:24:19.866 response: 00:24:19.866 { 00:24:19.866 "code": -32602, 00:24:19.866 "message": "Invalid SN o.Uz!\\]S\"Q\"TF7v&tmN50" 00:24:19.866 }' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:24:19.866 { 00:24:19.866 "nqn": "nqn.2016-06.io.spdk:cnode18805", 00:24:19.866 "serial_number": "o.Uz!\\]S\"Q\"TF7v&tmN50", 00:24:19.866 "method": "nvmf_create_subsystem", 00:24:19.866 "req_id": 1 00:24:19.866 } 00:24:19.866 Got JSON-RPC error response 00:24:19.866 response: 00:24:19.866 { 00:24:19.866 "code": -32602, 00:24:19.866 "message": "Invalid SN o.Uz!\\]S\"Q\"TF7v&tmN50" 00:24:19.866 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:19.866 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:24:20.127 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:24:20.128 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 
00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '27]IdlE..z:-,G.C\F]z5<3<@^Rvga"[TQj&i5<*' 00:24:20.129 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '27]IdlE..z:-,G.C\F]z5<3<@^Rvga"[TQj&i5<*' nqn.2016-06.io.spdk:cnode26941 00:24:20.390 [2024-06-10 11:32:49.282412] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26941: invalid model number '27]IdlE..z:-,G.C\F]z5<3<@^Rvga"[TQj&i5<*' 00:24:20.390 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:24:20.390 { 00:24:20.390 "nqn": "nqn.2016-06.io.spdk:cnode26941", 00:24:20.390 "model_number": "27\u007f]IdlE..z:-,G.C\\F]z5<3<@^Rvga\"[TQj&i5<*", 00:24:20.390 "method": "nvmf_create_subsystem", 00:24:20.390 "req_id": 1 00:24:20.390 } 00:24:20.390 Got JSON-RPC error response 00:24:20.390 response: 00:24:20.390 { 00:24:20.390 "code": -32602, 00:24:20.390 "message": "Invalid MN 27\u007f]IdlE..z:-,G.C\\F]z5<3<@^Rvga\"[TQj&i5<*" 00:24:20.390 }' 00:24:20.390 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:24:20.390 { 00:24:20.390 "nqn": "nqn.2016-06.io.spdk:cnode26941", 00:24:20.390 "model_number": "27\u007f]IdlE..z:-,G.C\\F]z5<3<@^Rvga\"[TQj&i5<*", 00:24:20.390 "method": "nvmf_create_subsystem", 00:24:20.390 "req_id": 1 00:24:20.390 } 00:24:20.390 Got JSON-RPC error response 00:24:20.390 response: 00:24:20.390 { 00:24:20.390 "code": -32602, 00:24:20.390 "message": "Invalid MN 27\u007f]IdlE..z:-,G.C\\F]z5<3<@^Rvga\"[TQj&i5<*" 00:24:20.390 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:24:20.390 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:24:20.651 [2024-06-10 11:32:49.503214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.651 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:24:20.912 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:24:20.912 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:24:20.912 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:24:20.912 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:24:20.912 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:24:21.173 [2024-06-10 11:32:49.952604] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:24:21.173 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:24:21.173 { 00:24:21.173 "nqn": "nqn.2016-06.io.spdk:cnode", 00:24:21.173 "listen_address": { 00:24:21.173 "trtype": "tcp", 00:24:21.173 "traddr": "", 00:24:21.173 "trsvcid": "4421" 00:24:21.173 }, 00:24:21.174 "method": 
"nvmf_subsystem_remove_listener", 00:24:21.174 "req_id": 1 00:24:21.174 } 00:24:21.174 Got JSON-RPC error response 00:24:21.174 response: 00:24:21.174 { 00:24:21.174 "code": -32602, 00:24:21.174 "message": "Invalid parameters" 00:24:21.174 }' 00:24:21.174 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:24:21.174 { 00:24:21.174 "nqn": "nqn.2016-06.io.spdk:cnode", 00:24:21.174 "listen_address": { 00:24:21.174 "trtype": "tcp", 00:24:21.174 "traddr": "", 00:24:21.174 "trsvcid": "4421" 00:24:21.174 }, 00:24:21.174 "method": "nvmf_subsystem_remove_listener", 00:24:21.174 "req_id": 1 00:24:21.174 } 00:24:21.174 Got JSON-RPC error response 00:24:21.174 response: 00:24:21.174 { 00:24:21.174 "code": -32602, 00:24:21.174 "message": "Invalid parameters" 00:24:21.174 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:24:21.174 11:32:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15454 -i 0 00:24:21.435 [2024-06-10 11:32:50.169298] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15454: invalid cntlid range [0-65519] 00:24:21.435 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:24:21.435 { 00:24:21.435 "nqn": "nqn.2016-06.io.spdk:cnode15454", 00:24:21.435 "min_cntlid": 0, 00:24:21.435 "method": "nvmf_create_subsystem", 00:24:21.435 "req_id": 1 00:24:21.435 } 00:24:21.435 Got JSON-RPC error response 00:24:21.435 response: 00:24:21.435 { 00:24:21.435 "code": -32602, 00:24:21.435 "message": "Invalid cntlid range [0-65519]" 00:24:21.435 }' 00:24:21.435 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:24:21.435 { 00:24:21.435 "nqn": "nqn.2016-06.io.spdk:cnode15454", 00:24:21.435 "min_cntlid": 0, 00:24:21.435 "method": "nvmf_create_subsystem", 00:24:21.435 "req_id": 1 00:24:21.435 } 00:24:21.435 Got JSON-RPC error response 00:24:21.435 response: 00:24:21.435 { 00:24:21.435 "code": -32602, 00:24:21.435 "message": "Invalid cntlid range [0-65519]" 00:24:21.435 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:21.435 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20419 -i 65520 00:24:21.435 [2024-06-10 11:32:50.394020] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20419: invalid cntlid range [65520-65519] 00:24:21.695 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:24:21.695 { 00:24:21.695 "nqn": "nqn.2016-06.io.spdk:cnode20419", 00:24:21.695 "min_cntlid": 65520, 00:24:21.695 "method": "nvmf_create_subsystem", 00:24:21.695 "req_id": 1 00:24:21.695 } 00:24:21.695 Got JSON-RPC error response 00:24:21.695 response: 00:24:21.695 { 00:24:21.695 "code": -32602, 00:24:21.695 "message": "Invalid cntlid range [65520-65519]" 00:24:21.695 }' 00:24:21.695 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:24:21.695 { 00:24:21.695 "nqn": "nqn.2016-06.io.spdk:cnode20419", 00:24:21.695 "min_cntlid": 65520, 00:24:21.695 "method": "nvmf_create_subsystem", 00:24:21.695 "req_id": 1 00:24:21.695 } 00:24:21.695 Got JSON-RPC error response 00:24:21.695 response: 00:24:21.695 { 00:24:21.695 "code": -32602, 00:24:21.695 "message": "Invalid cntlid range [65520-65519]" 00:24:21.695 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:21.695 11:32:50 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4113 -I 0 00:24:21.695 [2024-06-10 11:32:50.614770] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4113: invalid cntlid range [1-0] 00:24:21.695 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:24:21.695 { 00:24:21.695 "nqn": "nqn.2016-06.io.spdk:cnode4113", 00:24:21.695 "max_cntlid": 0, 00:24:21.695 "method": "nvmf_create_subsystem", 00:24:21.695 "req_id": 1 00:24:21.695 } 00:24:21.695 Got JSON-RPC error response 00:24:21.695 response: 00:24:21.695 { 00:24:21.695 "code": -32602, 00:24:21.695 "message": "Invalid cntlid range [1-0]" 00:24:21.695 }' 00:24:21.695 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:24:21.695 { 00:24:21.695 "nqn": "nqn.2016-06.io.spdk:cnode4113", 00:24:21.695 "max_cntlid": 0, 00:24:21.695 "method": "nvmf_create_subsystem", 00:24:21.695 "req_id": 1 00:24:21.695 } 00:24:21.695 Got JSON-RPC error response 00:24:21.695 response: 00:24:21.695 { 00:24:21.695 "code": -32602, 00:24:21.695 "message": "Invalid cntlid range [1-0]" 00:24:21.695 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:21.695 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2430 -I 65520 00:24:21.954 [2024-06-10 11:32:50.835516] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2430: invalid cntlid range [1-65520] 00:24:21.954 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:24:21.954 { 00:24:21.954 "nqn": "nqn.2016-06.io.spdk:cnode2430", 00:24:21.954 "max_cntlid": 65520, 00:24:21.954 "method": "nvmf_create_subsystem", 00:24:21.954 "req_id": 1 00:24:21.954 } 00:24:21.954 Got JSON-RPC error response 00:24:21.954 response: 00:24:21.954 { 00:24:21.954 "code": -32602, 00:24:21.954 "message": "Invalid cntlid range [1-65520]" 00:24:21.954 }' 00:24:21.954 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:24:21.954 { 00:24:21.954 "nqn": "nqn.2016-06.io.spdk:cnode2430", 00:24:21.954 "max_cntlid": 65520, 00:24:21.954 "method": "nvmf_create_subsystem", 00:24:21.954 "req_id": 1 00:24:21.954 } 00:24:21.954 Got JSON-RPC error response 00:24:21.954 response: 00:24:21.954 { 00:24:21.954 "code": -32602, 00:24:21.954 "message": "Invalid cntlid range [1-65520]" 00:24:21.954 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:21.954 11:32:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30511 -i 6 -I 5 00:24:22.214 [2024-06-10 11:32:51.052179] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30511: invalid cntlid range [6-5] 00:24:22.214 11:32:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:24:22.214 { 00:24:22.214 "nqn": "nqn.2016-06.io.spdk:cnode30511", 00:24:22.214 "min_cntlid": 6, 00:24:22.214 "max_cntlid": 5, 00:24:22.214 "method": "nvmf_create_subsystem", 00:24:22.214 "req_id": 1 00:24:22.214 } 00:24:22.214 Got JSON-RPC error response 00:24:22.214 response: 00:24:22.214 { 00:24:22.214 "code": -32602, 00:24:22.214 "message": "Invalid cntlid range [6-5]" 00:24:22.214 }' 00:24:22.214 11:32:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:24:22.214 { 
00:24:22.214 "nqn": "nqn.2016-06.io.spdk:cnode30511", 00:24:22.214 "min_cntlid": 6, 00:24:22.214 "max_cntlid": 5, 00:24:22.214 "method": "nvmf_create_subsystem", 00:24:22.214 "req_id": 1 00:24:22.214 } 00:24:22.214 Got JSON-RPC error response 00:24:22.214 response: 00:24:22.214 { 00:24:22.214 "code": -32602, 00:24:22.214 "message": "Invalid cntlid range [6-5]" 00:24:22.214 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:24:22.214 11:32:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:24:22.214 11:32:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:24:22.214 { 00:24:22.214 "name": "foobar", 00:24:22.214 "method": "nvmf_delete_target", 00:24:22.214 "req_id": 1 00:24:22.214 } 00:24:22.214 Got JSON-RPC error response 00:24:22.214 response: 00:24:22.214 { 00:24:22.214 "code": -32602, 00:24:22.214 "message": "The specified target doesn'\''t exist, cannot delete it." 00:24:22.214 }' 00:24:22.214 11:32:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:24:22.214 { 00:24:22.214 "name": "foobar", 00:24:22.214 "method": "nvmf_delete_target", 00:24:22.214 "req_id": 1 00:24:22.214 } 00:24:22.214 Got JSON-RPC error response 00:24:22.214 response: 00:24:22.214 { 00:24:22.214 "code": -32602, 00:24:22.214 "message": "The specified target doesn't exist, cannot delete it." 00:24:22.214 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.523 rmmod nvme_tcp 00:24:22.523 rmmod nvme_fabrics 00:24:22.523 rmmod nvme_keyring 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2186757 ']' 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2186757 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 2186757 ']' 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 2186757 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2186757 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:22.523 11:32:51 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2186757' 00:24:22.523 killing process with pid 2186757 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 2186757 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 2186757 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.523 11:32:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.078 11:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.078 00:24:25.078 real 0m12.808s 00:24:25.078 user 0m19.598s 00:24:25.078 sys 0m5.898s 00:24:25.078 11:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:25.078 11:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:24:25.078 ************************************ 00:24:25.078 END TEST nvmf_invalid 00:24:25.078 ************************************ 00:24:25.078 11:32:53 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:24:25.078 11:32:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:25.078 11:32:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:25.078 11:32:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.078 ************************************ 00:24:25.078 START TEST nvmf_abort 00:24:25.078 ************************************ 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:24:25.078 * Looking for test storage... 
00:24:25.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.078 11:32:53 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.079 11:32:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.663 11:33:00 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.663 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.663 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:24:31.663 00:24:31.663 --- 10.0.0.2 ping statistics --- 00:24:31.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.663 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:24:31.664 00:24:31.664 --- 10.0.0.1 ping statistics --- 00:24:31.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.664 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2191956 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2191956 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 2191956 ']' 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:31.664 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.664 [2024-06-10 11:33:00.577208] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:24:31.664 [2024-06-10 11:33:00.577263] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.664 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.925 [2024-06-10 11:33:00.642210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:31.925 [2024-06-10 11:33:00.708365] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.925 [2024-06-10 11:33:00.708401] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:31.925 [2024-06-10 11:33:00.708409] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.925 [2024-06-10 11:33:00.708416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.925 [2024-06-10 11:33:00.708422] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.925 [2024-06-10 11:33:00.708525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.925 [2024-06-10 11:33:00.708699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.925 [2024-06-10 11:33:00.708719] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.925 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:31.925 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:24:31.925 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.925 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:31.925 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.925 11:33:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.925 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.926 [2024-06-10 11:33:00.846577] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:31.926 Malloc0 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.926 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:32.187 Delay0 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.187 11:33:00 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:32.187 [2024-06-10 11:33:00.937135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.187 11:33:00 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:24:32.187 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.187 [2024-06-10 11:33:01.048936] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:24:34.733 Initializing NVMe Controllers 00:24:34.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:24:34.733 controller IO queue size 128 less than required 00:24:34.733 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:24:34.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:34.733 Initialization complete. Launching workers. 
00:24:34.733 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 33589 00:24:34.733 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33651, failed to submit 62 00:24:34.733 success 33593, unsuccess 58, failed 0 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.733 rmmod nvme_tcp 00:24:34.733 rmmod nvme_fabrics 00:24:34.733 rmmod nvme_keyring 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2191956 ']' 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2191956 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 2191956 ']' 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 2191956 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2191956 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2191956' 00:24:34.733 killing process with pid 2191956 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 2191956 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 2191956 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.733 11:33:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.648 11:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.648 00:24:36.648 real 0m11.881s 00:24:36.648 user 0m11.678s 00:24:36.648 sys 0m5.832s 00:24:36.648 11:33:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:36.648 11:33:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:24:36.648 ************************************ 00:24:36.648 END TEST nvmf_abort 00:24:36.648 ************************************ 00:24:36.649 11:33:05 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:24:36.649 11:33:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:36.649 11:33:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:36.649 11:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.649 ************************************ 00:24:36.649 START TEST nvmf_ns_hotplug_stress 00:24:36.649 ************************************ 00:24:36.649 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:24:36.911 * Looking for test storage... 00:24:36.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.911 11:33:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.911 11:33:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.911 11:33:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:45.057 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:45.057 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.057 11:33:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:45.057 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.057 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:45.058 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
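The nvmf_tcp_init trace that follows repeats the same bring-up the abort test performed earlier: the first detected E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and acts as the target, while cvl_0_1 stays in the root namespace as the initiator. Condensed into plain commands, and assuming the interface names detected above, the topology is roughly as below; the authoritative version is nvmf_tcp_init in test/nvmf/common.sh.

  ip netns add cvl_0_0_ns_spdk                                        # target port gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) into the initiator side
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check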
00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:24:45.058 00:24:45.058 --- 10.0.0.2 ping statistics --- 00:24:45.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.058 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:24:45.058 00:24:45.058 --- 10.0.0.1 ping statistics --- 00:24:45.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.058 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2197179 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2197179 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 2197179 ']' 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:45.058 11:33:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:24:45.058 [2024-06-10 11:33:12.969402] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:24:45.058 [2024-06-10 11:33:12.969459] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.058 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.058 [2024-06-10 11:33:13.037629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:45.058 [2024-06-10 11:33:13.102778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:45.058 [2024-06-10 11:33:13.102812] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.058 [2024-06-10 11:33:13.102819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.058 [2024-06-10 11:33:13.102825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.058 [2024-06-10 11:33:13.102831] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.058 [2024-06-10 11:33:13.102969] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.058 [2024-06-10 11:33:13.103122] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.058 [2024-06-10 11:33:13.103124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:24:45.058 11:33:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.319 [2024-06-10 11:33:14.043503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.319 11:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:45.319 11:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.579 [2024-06-10 11:33:14.477109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.579 11:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:45.840 11:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:24:46.101 Malloc0 00:24:46.101 11:33:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:46.361 Delay0 00:24:46.361 11:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:46.623 11:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:24:46.623 NULL1 00:24:46.623 11:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:24:46.883 11:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2197754 00:24:46.883 11:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:46.883 11:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:24:46.883 11:33:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:46.883 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.269 Read completed with error (sct=0, sc=11) 00:24:48.269 11:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:48.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:48.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:48.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:48.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:48.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:48.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:48.269 11:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:24:48.269 11:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:24:48.529 true 00:24:48.529 11:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:48.529 11:33:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:49.470 11:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:49.470 11:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:24:49.470 11:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:24:49.730 true 00:24:49.730 11:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:49.730 11:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:49.991 11:33:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:50.252 11:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:24:50.252 11:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:24:50.252 true 00:24:50.252 11:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:50.252 11:33:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:51.639 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:51.639 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:24:51.639 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:24:51.639 true 00:24:51.639 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:51.639 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:51.900 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:52.161 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:24:52.161 11:33:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:24:52.423 true 00:24:52.423 11:33:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:52.423 11:33:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:53.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:53.367 11:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:53.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:53.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:53.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:53.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:53.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:53.627 11:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:24:53.627 11:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:24:53.950 true 00:24:53.950 11:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:53.950 11:33:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:54.557 11:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:54.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:24:54.817 11:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:24:54.817 11:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:24:55.078 true 00:24:55.078 11:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:55.078 11:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:55.340 11:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:55.601 11:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:24:55.601 11:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:24:55.601 true 00:24:55.862 11:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:55.862 11:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:56.806 11:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:56.806 11:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:24:56.806 11:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:24:57.066 true 00:24:57.066 11:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:57.066 11:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:57.326 11:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:57.588 11:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:24:57.588 11:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:24:57.848 true 00:24:57.848 11:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:57.848 11:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:58.784 11:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:59.044 11:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:24:59.044 11:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:24:59.303 true 00:24:59.303 11:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:59.303 11:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:59.303 11:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:59.563 11:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:24:59.563 11:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:24:59.822 true 00:24:59.822 11:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:24:59.822 11:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:00.760 11:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:01.019 11:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:25:01.019 11:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:25:01.278 true 00:25:01.278 11:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:01.278 11:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:01.537 11:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:01.537 11:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:25:01.537 11:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:25:01.796 true 00:25:01.796 11:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:01.796 11:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:03.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:03.174 11:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:03.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:03.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:03.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:03.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:03.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:03.175 11:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:25:03.175 11:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:25:03.175 true 00:25:03.434 11:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:03.434 11:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:04.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:04.004 11:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:04.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:04.264 11:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:25:04.264 11:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:25:04.524 true 00:25:04.524 11:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:04.524 11:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:04.784 11:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:05.044 11:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:25:05.044 11:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:25:05.304 true 00:25:05.304 11:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:05.304 11:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:06.247 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:06.507 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:25:06.507 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:25:06.507 true 00:25:06.769 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:06.769 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:06.769 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:07.029 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:25:07.029 11:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:25:07.290 true 00:25:07.290 11:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:07.290 11:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:08.233 11:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:08.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:08.494 11:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:25:08.494 11:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:25:08.754 true 00:25:08.754 11:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:08.754 11:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:09.016 11:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:09.276 11:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:25:09.276 11:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:25:09.276 true 00:25:09.276 11:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:09.276 11:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:10.660 11:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:10.660 11:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:25:10.660 11:33:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:25:10.921 true 00:25:10.921 11:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:10.921 11:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:10.921 11:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:11.182 11:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:25:11.182 11:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:25:11.441 true 00:25:11.441 11:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:11.441 11:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:12.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:12.383 11:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:12.643 11:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:25:12.643 11:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:25:12.904 true 00:25:12.904 11:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:12.904 11:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:13.165 11:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:13.425 11:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:25:13.425 11:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:25:13.425 true 00:25:13.685 11:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:13.685 11:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:14.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:14.655 11:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:14.655 11:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:25:14.655 
11:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:25:14.918 true 00:25:14.918 11:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:14.918 11:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:15.180 11:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:15.441 11:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:25:15.441 11:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:25:15.700 true 00:25:15.700 11:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:15.700 11:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:16.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:16.640 11:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:16.900 11:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:25:16.900 11:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:25:17.161 Initializing NVMe Controllers 00:25:17.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.161 Controller IO queue size 128, less than required. 00:25:17.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.161 Controller IO queue size 128, less than required. 00:25:17.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:17.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:17.161 Initialization complete. Launching workers. 
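The ns_hotplug_stress.sh@44-@50 entries above are one iteration of the test's main hotplug loop, which keeps cycling namespace 1 and growing the NULL1 bdev for as long as the I/O generator (pid 2197754 in this run) stays alive. A minimal sketch of that loop, reconstructed from the xtrace; rpc_py is the path used throughout this log, while perf_pid and the starting null_size are placeholders rather than values taken from the script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                      # placeholder start; the trace above is already at 10xx
    while kill -0 "$perf_pid"; do       # loop until the I/O generator exits
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done

Once kill -0 fails ("No such process" just below), the script falls through to wait for the generator and then removes the remaining namespaces.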
00:25:17.161 ======================================================== 00:25:17.161 Latency(us) 00:25:17.161 Device Information : IOPS MiB/s Average min max 00:25:17.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 961.39 0.47 79230.62 2517.07 1124839.38 00:25:17.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17868.23 8.72 7163.19 1631.74 401129.62 00:25:17.161 ======================================================== 00:25:17.161 Total : 18829.62 9.19 10842.76 1631.74 1124839.38 00:25:17.161 00:25:17.161 true 00:25:17.161 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2197754 00:25:17.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2197754) - No such process 00:25:17.161 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2197754 00:25:17.161 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:17.422 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:17.422 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:25:17.422 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:25:17.422 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:25:17.422 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:17.422 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:25:17.683 null0 00:25:17.683 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:17.683 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:17.683 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:25:17.943 null1 00:25:17.943 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:17.943 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:17.943 11:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:25:18.203 null2 00:25:18.203 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:18.203 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:18.203 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:25:18.464 null3 00:25:18.464 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:18.464 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:18.464 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:25:18.464 null4 00:25:18.724 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:18.724 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:18.724 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:25:18.724 null5 00:25:18.724 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:18.724 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:18.724 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:25:18.985 null6 00:25:18.985 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:18.985 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:18.985 11:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:25:19.246 null7 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
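The latency summary printed by the I/O generator above reads as an IOPS-weighted aggregate: 961.39 + 17868.23 = 18829.62 IOPS for the Total row, and (961.39 × 79230.62 + 17868.23 × 7163.19) / 18829.62 ≈ 10842.8 us average latency, consistent with the 10842.76 shown. Namespace 1, the one being hot-plugged by the loop, averages roughly 79 ms with a worst case around 1.12 s, while namespace 2 stays near 7.2 ms.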
00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
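The ns_hotplug_stress.sh@58-@64 entries above switch the test into its multi-threaded phase: nthreads=8, one null bdev per worker (bdev_null_create nullN 100 4096, as traced), and one background add_remove worker per namespace ID, with each pid collected for a later wait. Roughly, with anything not visible in the trace treated as an assumption:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096      # backing bdev for worker i
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                # namespace IDs 1..8
        pids+=($!)
    done
    wait "${pids[@]}"                                   # the @66 wait seen further down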
00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.246 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
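The interleaved @14-@18 entries are the bodies of those eight add_remove workers running concurrently, each pinned to one namespace ID and one backing bdev and repeating an add/remove cycle ten times. A rough reconstruction from the xtrace, using the names as they appear in the trace (the real function may differ in details):

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the bdev as namespace $nsid on the subsystem...
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # ...then immediately hot-unplug it again
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Because the eight workers share one console stream, their add/remove lines for different namespace IDs appear shuffled together in the trace that follows.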
00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2204327 2204328 2204330 2204332 2204334 2204336 2204338 2204340 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.247 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:19.507 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:19.767 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:20.027 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:20.027 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:20.027 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:20.027 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.028 11:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:20.287 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:20.546 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:20.547 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:20.547 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:20.805 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:20.805 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:20.805 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:20.805 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:20.806 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:20.806 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:20.806 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:20.806 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.065 
11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:21.065 11:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:21.065 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:21.065 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:21.065 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:21.324 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.325 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:21.584 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:21.844 
11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:21.844 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.106 11:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:22.106 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.106 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.106 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:22.106 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.106 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.106 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:22.366 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.627 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:22.888 
11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:22.888 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:23.149 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:23.150 11:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:23.150 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:23.150 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
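The entries above are one pass of the namespace hotplug stress cycle: the nullN bdevs are attached to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 in a shuffled order, then detached again. A minimal shell sketch of that cycle, reconstructed only from the rpc.py calls visible in this log (the real ns_hotplug_stress.sh also sets up the target, installs traps, and drives its loop counter differently):

#!/usr/bin/env bash
# Sketch of the add/remove cycle logged above. The rpc.py path and the NQN are
# the ones this run used; the pass count and the shuffle are illustrative only.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for pass in {1..10}; do
    for n in $(shuf -e {1..8}); do                 # attach namespaces in random order
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in $(shuf -e {1..8}); do                 # detach them again in random order
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done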
00:25:23.411 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.671 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.671 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.671 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.671 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.671 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:23.671 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:23.672 rmmod nvme_tcp 00:25:23.672 rmmod nvme_fabrics 00:25:23.672 rmmod nvme_keyring 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2197179 ']' 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2197179 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 2197179 ']' 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 2197179 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2197179 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2197179' 00:25:23.672 killing process with pid 2197179 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 2197179 00:25:23.672 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 2197179 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.933 11:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.845 11:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:25.846 00:25:25.846 real 0m49.210s 00:25:25.846 user 3m16.356s 00:25:25.846 sys 0m15.329s 00:25:25.846 11:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:25.846 11:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:25.846 ************************************ 00:25:25.846 END TEST nvmf_ns_hotplug_stress 00:25:25.846 ************************************ 00:25:25.846 11:33:54 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:25:25.846 11:33:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:25.846 11:33:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:25.846 11:33:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.107 ************************************ 00:25:26.107 START TEST nvmf_connect_stress 00:25:26.107 ************************************ 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:25:26.107 * Looking for test storage... 
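The nvmftestfini teardown recorded just before the END banner above reduces to a handful of host commands. A rough equivalent, using the PID and interface name from this particular run (the _remove_spdk_ns helper's body is not shown in this log, so it is only referenced in a comment):

# Approximate teardown sequence from the entries above (nvmftestfini).
sync
modprobe -v -r nvme-tcp        # per the rmmod lines, this drops nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 2197179                   # nvmf_tgt PID of this run; the script then waits for it to exit
# _remove_spdk_ns              # netns cleanup helper; its implementation is not visible here
ip -4 addr flush cvl_0_1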
00:25:26.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.107 11:33:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:34.269 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:34.269 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.269 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:34.270 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.270 11:34:01 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:34.270 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.270 11:34:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:34.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:25:34.270 00:25:34.270 --- 10.0.0.2 ping statistics --- 00:25:34.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.270 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:34.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:25:34.270 00:25:34.270 --- 10.0.0.1 ping statistics --- 00:25:34.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.270 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2209491 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2209491 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 2209491 ']' 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.270 [2024-06-10 11:34:02.188080] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
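The ping replies above close out the nvmf_tcp_init bring-up whose individual steps are scattered through the preceding entries. Collected in order, with the interface names and addresses this run used, the sequence is roughly the following; splitting the two ports across network namespaces keeps initiator-to-target traffic on the physical link rather than the local stack:

# TCP test network bring-up as recorded in this log (nvmf/common.sh, nvmf_tcp_init).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target port gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), as the nvmfappstart entries that follow show.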
00:25:34.270 [2024-06-10 11:34:02.188133] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.270 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.270 [2024-06-10 11:34:02.254922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:34.270 [2024-06-10 11:34:02.320305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.270 [2024-06-10 11:34:02.320342] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.270 [2024-06-10 11:34:02.320349] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.270 [2024-06-10 11:34:02.320356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.270 [2024-06-10 11:34:02.320361] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.270 [2024-06-10 11:34:02.320473] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.270 [2024-06-10 11:34:02.320629] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.270 [2024-06-10 11:34:02.320630] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.270 [2024-06-10 11:34:02.454262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.270 [2024-06-10 11:34:02.487854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.270 NULL1 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2209514 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.270 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.271 11:34:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.531 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.531 11:34:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:34.531 11:34:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:34.531 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.531 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:34.792 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:34.792 11:34:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:34.792 11:34:03 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:34.792 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:34.792 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:35.052 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.052 11:34:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:35.052 11:34:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:35.052 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.052 11:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:35.311 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.311 11:34:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:35.311 11:34:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:35.311 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.311 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:35.881 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:35.881 11:34:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:35.881 11:34:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:35.881 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:35.881 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:36.142 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.142 11:34:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:36.142 11:34:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:36.142 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.142 11:34:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:36.402 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.402 11:34:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:36.402 11:34:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:36.402 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.402 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:36.663 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.663 11:34:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:36.663 11:34:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:36.663 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.663 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:36.924 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:36.924 11:34:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:36.924 11:34:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:25:36.924 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:36.924 11:34:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:37.494 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.494 11:34:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:37.494 11:34:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:37.494 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.494 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:37.754 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.754 11:34:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:37.754 11:34:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:37.754 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.754 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:38.015 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.015 11:34:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:38.015 11:34:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:38.015 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.015 11:34:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:38.275 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.275 11:34:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:38.275 11:34:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:38.275 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.275 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:38.537 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.537 11:34:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:38.537 11:34:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:38.537 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.537 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:39.144 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.144 11:34:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:39.144 11:34:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:39.144 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.144 11:34:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:39.422 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.422 11:34:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:39.422 11:34:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:39.422 11:34:08 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.422 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:39.684 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.684 11:34:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:39.684 11:34:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:39.684 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.684 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:39.945 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.945 11:34:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:39.945 11:34:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:39.945 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.945 11:34:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:40.206 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:40.206 11:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:40.206 11:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:40.206 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:40.206 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:40.778 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:40.778 11:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:40.778 11:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:40.778 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:40.778 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.038 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.038 11:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:41.038 11:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:41.038 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.038 11:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.298 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.298 11:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:41.298 11:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:41.299 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.299 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.560 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.560 11:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:41.560 11:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:41.560 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:25:41.560 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.821 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.821 11:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:41.821 11:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:41.821 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.821 11:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:42.397 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.397 11:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:42.397 11:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:42.397 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.397 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:42.660 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.660 11:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:42.660 11:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:42.660 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.660 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:42.920 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.920 11:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:42.920 11:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:42.920 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.920 11:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:43.180 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.180 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:43.180 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:43.180 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.180 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:43.441 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.441 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:43.441 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:25:43.441 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.441 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:43.701 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.961 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.961 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2209514 00:25:43.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2209514) - No such process 
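The repeated kill -0 / rpc_cmd pairs above are the monitor loop from connect_stress.sh: line 34 checks that the stress process (PID 2209514) is still alive and line 35 issues RPCs against the target while it runs; once kill reports "No such process" the script waits on the PID and removes its temporary rpc.txt (lines 38-39 below). A minimal sketch of that pattern, assuming the variable names and the stdin feed from rpc.txt rather than quoting the actual script:

    # keep the RPC path busy for as long as the stress workload is running
    while kill -0 "$STRESS_PID" 2>/dev/null; do
        rpc_cmd < "$RPC_FILE"        # rpc_cmd helper from autotest_common.sh; feeding it rpc.txt is an assumption
    done
    wait "$STRESS_PID" || true       # reap the finished stress process (script line 38)
    rm -f "$RPC_FILE"                # drop the temporary RPC input (script line 39)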
00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2209514 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.962 rmmod nvme_tcp 00:25:43.962 rmmod nvme_fabrics 00:25:43.962 rmmod nvme_keyring 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2209491 ']' 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2209491 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 2209491 ']' 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 2209491 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2209491 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2209491' 00:25:43.962 killing process with pid 2209491 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 2209491 00:25:43.962 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 2209491 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.223 11:34:12 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.138 11:34:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:46.138 00:25:46.138 real 0m20.202s 00:25:46.138 user 0m40.283s 00:25:46.138 sys 0m8.698s 00:25:46.138 11:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:46.138 11:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:25:46.138 ************************************ 00:25:46.138 END TEST nvmf_connect_stress 00:25:46.138 ************************************ 00:25:46.138 11:34:15 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:25:46.138 11:34:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:46.138 11:34:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:46.138 11:34:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.138 ************************************ 00:25:46.138 START TEST nvmf_fused_ordering 00:25:46.138 ************************************ 00:25:46.138 11:34:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:25:46.400 * Looking for test storage... 00:25:46.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.400 11:34:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:52.992 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.993 
11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:52.993 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:52.993 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.993 11:34:21 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:52.993 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:52.993 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.993 11:34:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:25:53.257 00:25:53.257 --- 10.0.0.2 ping statistics --- 00:25:53.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.257 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:25:53.257 00:25:53.257 --- 10.0.0.1 ping statistics --- 00:25:53.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.257 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.257 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2215761 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2215761 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:53.519 11:34:22 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 2215761 ']' 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:53.519 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.519 [2024-06-10 11:34:22.296000] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:53.519 [2024-06-10 11:34:22.296061] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.519 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.519 [2024-06-10 11:34:22.362087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.519 [2024-06-10 11:34:22.426912] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.519 [2024-06-10 11:34:22.426947] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.519 [2024-06-10 11:34:22.426955] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.519 [2024-06-10 11:34:22.426963] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.519 [2024-06-10 11:34:22.426968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
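The target for this test is started inside the cvl_0_0_ns_spdk namespace via "ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2", and waitforlisten blocks until the new process answers on the UNIX domain RPC socket (/var/tmp/spdk.sock per the "Waiting for process..." message above). A minimal sketch of that launch-and-wait step, with $SPDK_DIR standing in for the workspace path:

    # start nvmf_tgt in the target namespace and wait for its RPC socket to come up
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"         # autotest_common.sh helper; waits on /var/tmp/spdk.sock by default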
00:25:53.519 [2024-06-10 11:34:22.426985] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.781 [2024-06-10 11:34:22.552198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.781 [2024-06-10 11:34:22.576399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.781 NULL1 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.781 11:34:22 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.781 11:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:53.781 [2024-06-10 11:34:22.639660] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:25:53.781 [2024-06-10 11:34:22.639715] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215892 ] 00:25:53.781 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.352 Attached to nqn.2016-06.io.spdk:cnode1 00:25:54.352 Namespace ID: 1 size: 1GB 00:25:54.352 fused_ordering(0) 00:25:54.352 fused_ordering(1) 00:25:54.352 fused_ordering(2) 00:25:54.352 fused_ordering(3) 00:25:54.352 fused_ordering(4) 00:25:54.352 fused_ordering(5) 00:25:54.352 fused_ordering(6) 00:25:54.352 fused_ordering(7) 00:25:54.352 fused_ordering(8) 00:25:54.352 fused_ordering(9) 00:25:54.352 fused_ordering(10) 00:25:54.352 fused_ordering(11) 00:25:54.352 fused_ordering(12) 00:25:54.352 fused_ordering(13) 00:25:54.352 fused_ordering(14) 00:25:54.352 fused_ordering(15) 00:25:54.352 fused_ordering(16) 00:25:54.352 fused_ordering(17) 00:25:54.352 fused_ordering(18) 00:25:54.352 fused_ordering(19) 00:25:54.352 fused_ordering(20) 00:25:54.352 fused_ordering(21) 00:25:54.352 fused_ordering(22) 00:25:54.352 fused_ordering(23) 00:25:54.352 fused_ordering(24) 00:25:54.352 fused_ordering(25) 00:25:54.352 fused_ordering(26) 00:25:54.352 fused_ordering(27) 00:25:54.352 fused_ordering(28) 00:25:54.352 fused_ordering(29) 00:25:54.352 fused_ordering(30) 00:25:54.352 fused_ordering(31) 00:25:54.352 fused_ordering(32) 00:25:54.352 fused_ordering(33) 00:25:54.352 fused_ordering(34) 00:25:54.352 fused_ordering(35) 00:25:54.352 fused_ordering(36) 00:25:54.352 fused_ordering(37) 00:25:54.352 fused_ordering(38) 00:25:54.352 fused_ordering(39) 00:25:54.352 fused_ordering(40) 00:25:54.352 fused_ordering(41) 00:25:54.352 fused_ordering(42) 00:25:54.352 fused_ordering(43) 00:25:54.352 fused_ordering(44) 00:25:54.352 fused_ordering(45) 00:25:54.352 fused_ordering(46) 00:25:54.352 fused_ordering(47) 00:25:54.352 fused_ordering(48) 00:25:54.352 fused_ordering(49) 00:25:54.352 fused_ordering(50) 00:25:54.352 fused_ordering(51) 00:25:54.352 fused_ordering(52) 00:25:54.352 fused_ordering(53) 00:25:54.352 fused_ordering(54) 00:25:54.352 fused_ordering(55) 00:25:54.352 fused_ordering(56) 00:25:54.352 fused_ordering(57) 00:25:54.352 fused_ordering(58) 00:25:54.352 fused_ordering(59) 00:25:54.352 fused_ordering(60) 00:25:54.352 fused_ordering(61) 00:25:54.352 fused_ordering(62) 00:25:54.352 fused_ordering(63) 00:25:54.352 fused_ordering(64) 00:25:54.352 fused_ordering(65) 00:25:54.352 fused_ordering(66) 00:25:54.352 fused_ordering(67) 00:25:54.352 fused_ordering(68) 00:25:54.352 fused_ordering(69) 00:25:54.352 fused_ordering(70) 00:25:54.352 fused_ordering(71) 00:25:54.352 fused_ordering(72) 00:25:54.352 fused_ordering(73) 00:25:54.352 fused_ordering(74) 00:25:54.352 fused_ordering(75) 00:25:54.352 fused_ordering(76) 00:25:54.352 fused_ordering(77) 00:25:54.352 fused_ordering(78) 00:25:54.352 fused_ordering(79) 
00:25:54.352 fused_ordering(80) 00:25:54.352 fused_ordering(81) 00:25:54.352 fused_ordering(82) 00:25:54.352 fused_ordering(83) 00:25:54.352 fused_ordering(84) 00:25:54.352 fused_ordering(85) 00:25:54.352 fused_ordering(86) 00:25:54.352 fused_ordering(87) 00:25:54.352 fused_ordering(88) 00:25:54.352 fused_ordering(89) 00:25:54.352 fused_ordering(90) 00:25:54.352 fused_ordering(91) 00:25:54.352 fused_ordering(92) 00:25:54.352 fused_ordering(93) 00:25:54.352 fused_ordering(94) 00:25:54.352 fused_ordering(95) 00:25:54.352 fused_ordering(96) 00:25:54.352 fused_ordering(97) 00:25:54.352 fused_ordering(98) 00:25:54.352 fused_ordering(99) 00:25:54.352 fused_ordering(100) 00:25:54.352 fused_ordering(101) 00:25:54.352 fused_ordering(102) 00:25:54.352 fused_ordering(103) 00:25:54.352 fused_ordering(104) 00:25:54.352 fused_ordering(105) 00:25:54.352 fused_ordering(106) 00:25:54.353 fused_ordering(107) 00:25:54.353 fused_ordering(108) 00:25:54.353 fused_ordering(109) 00:25:54.353 fused_ordering(110) 00:25:54.353 fused_ordering(111) 00:25:54.353 fused_ordering(112) 00:25:54.353 fused_ordering(113) 00:25:54.353 fused_ordering(114) 00:25:54.353 fused_ordering(115) 00:25:54.353 fused_ordering(116) 00:25:54.353 fused_ordering(117) 00:25:54.353 fused_ordering(118) 00:25:54.353 fused_ordering(119) 00:25:54.353 fused_ordering(120) 00:25:54.353 fused_ordering(121) 00:25:54.353 fused_ordering(122) 00:25:54.353 fused_ordering(123) 00:25:54.353 fused_ordering(124) 00:25:54.353 fused_ordering(125) 00:25:54.353 fused_ordering(126) 00:25:54.353 fused_ordering(127) 00:25:54.353 fused_ordering(128) 00:25:54.353 fused_ordering(129) 00:25:54.353 fused_ordering(130) 00:25:54.353 fused_ordering(131) 00:25:54.353 fused_ordering(132) 00:25:54.353 fused_ordering(133) 00:25:54.353 fused_ordering(134) 00:25:54.353 fused_ordering(135) 00:25:54.353 fused_ordering(136) 00:25:54.353 fused_ordering(137) 00:25:54.353 fused_ordering(138) 00:25:54.353 fused_ordering(139) 00:25:54.353 fused_ordering(140) 00:25:54.353 fused_ordering(141) 00:25:54.353 fused_ordering(142) 00:25:54.353 fused_ordering(143) 00:25:54.353 fused_ordering(144) 00:25:54.353 fused_ordering(145) 00:25:54.353 fused_ordering(146) 00:25:54.353 fused_ordering(147) 00:25:54.353 fused_ordering(148) 00:25:54.353 fused_ordering(149) 00:25:54.353 fused_ordering(150) 00:25:54.353 fused_ordering(151) 00:25:54.353 fused_ordering(152) 00:25:54.353 fused_ordering(153) 00:25:54.353 fused_ordering(154) 00:25:54.353 fused_ordering(155) 00:25:54.353 fused_ordering(156) 00:25:54.353 fused_ordering(157) 00:25:54.353 fused_ordering(158) 00:25:54.353 fused_ordering(159) 00:25:54.353 fused_ordering(160) 00:25:54.353 fused_ordering(161) 00:25:54.353 fused_ordering(162) 00:25:54.353 fused_ordering(163) 00:25:54.353 fused_ordering(164) 00:25:54.353 fused_ordering(165) 00:25:54.353 fused_ordering(166) 00:25:54.353 fused_ordering(167) 00:25:54.353 fused_ordering(168) 00:25:54.353 fused_ordering(169) 00:25:54.353 fused_ordering(170) 00:25:54.353 fused_ordering(171) 00:25:54.353 fused_ordering(172) 00:25:54.353 fused_ordering(173) 00:25:54.353 fused_ordering(174) 00:25:54.353 fused_ordering(175) 00:25:54.353 fused_ordering(176) 00:25:54.353 fused_ordering(177) 00:25:54.353 fused_ordering(178) 00:25:54.353 fused_ordering(179) 00:25:54.353 fused_ordering(180) 00:25:54.353 fused_ordering(181) 00:25:54.353 fused_ordering(182) 00:25:54.353 fused_ordering(183) 00:25:54.353 fused_ordering(184) 00:25:54.353 fused_ordering(185) 00:25:54.353 fused_ordering(186) 00:25:54.353 fused_ordering(187) 
00:25:54.353 fused_ordering(188) 00:25:54.353 fused_ordering(189) 00:25:54.353 fused_ordering(190) 00:25:54.353 fused_ordering(191) 00:25:54.353 fused_ordering(192) 00:25:54.353 fused_ordering(193) 00:25:54.353 fused_ordering(194) 00:25:54.353 fused_ordering(195) 00:25:54.353 fused_ordering(196) 00:25:54.353 fused_ordering(197) 00:25:54.353 fused_ordering(198) 00:25:54.353 fused_ordering(199) 00:25:54.353 fused_ordering(200) 00:25:54.353 fused_ordering(201) 00:25:54.353 fused_ordering(202) 00:25:54.353 fused_ordering(203) 00:25:54.353 fused_ordering(204) 00:25:54.353 fused_ordering(205) 00:25:54.614 fused_ordering(206) 00:25:54.614 fused_ordering(207) 00:25:54.614 fused_ordering(208) 00:25:54.614 fused_ordering(209) 00:25:54.614 fused_ordering(210) 00:25:54.614 fused_ordering(211) 00:25:54.614 fused_ordering(212) 00:25:54.614 fused_ordering(213) 00:25:54.614 fused_ordering(214) 00:25:54.614 fused_ordering(215) 00:25:54.614 fused_ordering(216) 00:25:54.614 fused_ordering(217) 00:25:54.614 fused_ordering(218) 00:25:54.614 fused_ordering(219) 00:25:54.614 fused_ordering(220) 00:25:54.614 fused_ordering(221) 00:25:54.614 fused_ordering(222) 00:25:54.614 fused_ordering(223) 00:25:54.614 fused_ordering(224) 00:25:54.614 fused_ordering(225) 00:25:54.614 fused_ordering(226) 00:25:54.614 fused_ordering(227) 00:25:54.614 fused_ordering(228) 00:25:54.614 fused_ordering(229) 00:25:54.614 fused_ordering(230) 00:25:54.614 fused_ordering(231) 00:25:54.614 fused_ordering(232) 00:25:54.614 fused_ordering(233) 00:25:54.614 fused_ordering(234) 00:25:54.614 fused_ordering(235) 00:25:54.614 fused_ordering(236) 00:25:54.614 fused_ordering(237) 00:25:54.614 fused_ordering(238) 00:25:54.614 fused_ordering(239) 00:25:54.614 fused_ordering(240) 00:25:54.614 fused_ordering(241) 00:25:54.614 fused_ordering(242) 00:25:54.614 fused_ordering(243) 00:25:54.614 fused_ordering(244) 00:25:54.614 fused_ordering(245) 00:25:54.614 fused_ordering(246) 00:25:54.614 fused_ordering(247) 00:25:54.614 fused_ordering(248) 00:25:54.614 fused_ordering(249) 00:25:54.614 fused_ordering(250) 00:25:54.614 fused_ordering(251) 00:25:54.614 fused_ordering(252) 00:25:54.614 fused_ordering(253) 00:25:54.614 fused_ordering(254) 00:25:54.614 fused_ordering(255) 00:25:54.614 fused_ordering(256) 00:25:54.614 fused_ordering(257) 00:25:54.614 fused_ordering(258) 00:25:54.614 fused_ordering(259) 00:25:54.614 fused_ordering(260) 00:25:54.614 fused_ordering(261) 00:25:54.614 fused_ordering(262) 00:25:54.614 fused_ordering(263) 00:25:54.614 fused_ordering(264) 00:25:54.614 fused_ordering(265) 00:25:54.614 fused_ordering(266) 00:25:54.614 fused_ordering(267) 00:25:54.614 fused_ordering(268) 00:25:54.614 fused_ordering(269) 00:25:54.614 fused_ordering(270) 00:25:54.614 fused_ordering(271) 00:25:54.614 fused_ordering(272) 00:25:54.614 fused_ordering(273) 00:25:54.614 fused_ordering(274) 00:25:54.614 fused_ordering(275) 00:25:54.614 fused_ordering(276) 00:25:54.614 fused_ordering(277) 00:25:54.614 fused_ordering(278) 00:25:54.614 fused_ordering(279) 00:25:54.614 fused_ordering(280) 00:25:54.614 fused_ordering(281) 00:25:54.614 fused_ordering(282) 00:25:54.614 fused_ordering(283) 00:25:54.614 fused_ordering(284) 00:25:54.614 fused_ordering(285) 00:25:54.614 fused_ordering(286) 00:25:54.614 fused_ordering(287) 00:25:54.614 fused_ordering(288) 00:25:54.614 fused_ordering(289) 00:25:54.614 fused_ordering(290) 00:25:54.614 fused_ordering(291) 00:25:54.614 fused_ordering(292) 00:25:54.614 fused_ordering(293) 00:25:54.614 fused_ordering(294) 00:25:54.614 
fused_ordering(295) 00:25:54.614 fused_ordering(296) 00:25:54.614 fused_ordering(297) 00:25:54.614 fused_ordering(298) 00:25:54.614 fused_ordering(299) 00:25:54.614 fused_ordering(300) 00:25:54.614 fused_ordering(301) 00:25:54.614 fused_ordering(302) 00:25:54.614 fused_ordering(303) 00:25:54.614 fused_ordering(304) 00:25:54.614 fused_ordering(305) 00:25:54.614 fused_ordering(306) 00:25:54.614 fused_ordering(307) 00:25:54.614 fused_ordering(308) 00:25:54.614 fused_ordering(309) 00:25:54.614 fused_ordering(310) 00:25:54.614 fused_ordering(311) 00:25:54.614 fused_ordering(312) 00:25:54.614 fused_ordering(313) 00:25:54.614 fused_ordering(314) 00:25:54.614 fused_ordering(315) 00:25:54.614 fused_ordering(316) 00:25:54.614 fused_ordering(317) 00:25:54.614 fused_ordering(318) 00:25:54.614 fused_ordering(319) 00:25:54.614 fused_ordering(320) 00:25:54.614 fused_ordering(321) 00:25:54.614 fused_ordering(322) 00:25:54.614 fused_ordering(323) 00:25:54.614 fused_ordering(324) 00:25:54.614 fused_ordering(325) 00:25:54.614 fused_ordering(326) 00:25:54.614 fused_ordering(327) 00:25:54.614 fused_ordering(328) 00:25:54.614 fused_ordering(329) 00:25:54.614 fused_ordering(330) 00:25:54.614 fused_ordering(331) 00:25:54.614 fused_ordering(332) 00:25:54.614 fused_ordering(333) 00:25:54.614 fused_ordering(334) 00:25:54.614 fused_ordering(335) 00:25:54.614 fused_ordering(336) 00:25:54.614 fused_ordering(337) 00:25:54.614 fused_ordering(338) 00:25:54.614 fused_ordering(339) 00:25:54.614 fused_ordering(340) 00:25:54.614 fused_ordering(341) 00:25:54.614 fused_ordering(342) 00:25:54.614 fused_ordering(343) 00:25:54.614 fused_ordering(344) 00:25:54.614 fused_ordering(345) 00:25:54.614 fused_ordering(346) 00:25:54.614 fused_ordering(347) 00:25:54.614 fused_ordering(348) 00:25:54.614 fused_ordering(349) 00:25:54.614 fused_ordering(350) 00:25:54.614 fused_ordering(351) 00:25:54.614 fused_ordering(352) 00:25:54.614 fused_ordering(353) 00:25:54.614 fused_ordering(354) 00:25:54.614 fused_ordering(355) 00:25:54.614 fused_ordering(356) 00:25:54.614 fused_ordering(357) 00:25:54.614 fused_ordering(358) 00:25:54.614 fused_ordering(359) 00:25:54.614 fused_ordering(360) 00:25:54.614 fused_ordering(361) 00:25:54.614 fused_ordering(362) 00:25:54.614 fused_ordering(363) 00:25:54.614 fused_ordering(364) 00:25:54.614 fused_ordering(365) 00:25:54.614 fused_ordering(366) 00:25:54.614 fused_ordering(367) 00:25:54.614 fused_ordering(368) 00:25:54.614 fused_ordering(369) 00:25:54.614 fused_ordering(370) 00:25:54.614 fused_ordering(371) 00:25:54.614 fused_ordering(372) 00:25:54.614 fused_ordering(373) 00:25:54.614 fused_ordering(374) 00:25:54.614 fused_ordering(375) 00:25:54.614 fused_ordering(376) 00:25:54.614 fused_ordering(377) 00:25:54.614 fused_ordering(378) 00:25:54.614 fused_ordering(379) 00:25:54.615 fused_ordering(380) 00:25:54.615 fused_ordering(381) 00:25:54.615 fused_ordering(382) 00:25:54.615 fused_ordering(383) 00:25:54.615 fused_ordering(384) 00:25:54.615 fused_ordering(385) 00:25:54.615 fused_ordering(386) 00:25:54.615 fused_ordering(387) 00:25:54.615 fused_ordering(388) 00:25:54.615 fused_ordering(389) 00:25:54.615 fused_ordering(390) 00:25:54.615 fused_ordering(391) 00:25:54.615 fused_ordering(392) 00:25:54.615 fused_ordering(393) 00:25:54.615 fused_ordering(394) 00:25:54.615 fused_ordering(395) 00:25:54.615 fused_ordering(396) 00:25:54.615 fused_ordering(397) 00:25:54.615 fused_ordering(398) 00:25:54.615 fused_ordering(399) 00:25:54.615 fused_ordering(400) 00:25:54.615 fused_ordering(401) 00:25:54.615 fused_ordering(402) 
00:25:54.615 fused_ordering(403) [... per-request fused_ordering trace for requests 404-1022 elided; all requests completed, timestamps 00:25:54.615-00:25:56.328 ...] 00:25:56.328 fused_ordering(1023)
00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:25:56.329 rmmod nvme_tcp 00:25:56.329 rmmod nvme_fabrics 00:25:56.329 rmmod nvme_keyring 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2215761 ']' 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2215761 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 2215761 ']' 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 2215761 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2215761 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2215761' 00:25:56.329 killing process with pid 2215761 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 2215761 00:25:56.329 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 2215761 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.590 11:34:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.500 11:34:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:58.500 00:25:58.500 real 0m12.299s 00:25:58.500 user 0m6.316s 00:25:58.500 sys 0m6.698s 00:25:58.500 11:34:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:58.500 11:34:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:25:58.500 ************************************ 00:25:58.500 END TEST nvmf_fused_ordering 00:25:58.500 ************************************ 00:25:58.500 11:34:27 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:25:58.500 11:34:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:58.500 11:34:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:58.500 11:34:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:58.762 
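Before the next test begins, the fused_ordering run above tears down its target. A rough manual equivalent of the nvmftestfini/nvmfcleanup steps traced above is sketched below; it assumes the nvmf_tgt pid 2215761 and the cvl_0_1 interface named in this run, and is only an approximation of what the harness functions do.

    # Sketch of the cleanup traced above (nvmftestfini -> nvmfcleanup).
    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill 2215761                   # stop the nvmf_tgt app started for this test; the harness then waits on it
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address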
************************************ 00:25:58.762 START TEST nvmf_delete_subsystem 00:25:58.762 ************************************ 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:25:58.762 * Looking for test storage... 00:25:58.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.762 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:58.763 11:34:27 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:25:58.763 11:34:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:05.347 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.347 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:26:05.347 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:05.347 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:05.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:05.348 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.348 
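For reference, the device-discovery loop traced here can be reproduced by hand. This is a minimal sketch assuming the same sysfs layout the harness globs (/sys/bus/pci/devices/<addr>/net/) and the two E810 functions reported above; it only mirrors the lookup, not the rest of gather_supported_nvmf_pci_devs.

    # Sketch: map the detected E810 PCI functions to their kernel net devices
    # via sysfs, mirroring the pci_net_devs glob used by nvmf/common.sh above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$netdir" ] || continue      # skip functions with no bound net device
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done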
11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:05.348 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:05.348 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.348 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.609 11:34:34 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:05.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:26:05.609 00:26:05.609 --- 10.0.0.2 ping statistics --- 00:26:05.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.609 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:26:05.609 00:26:05.609 --- 10.0.0.1 ping statistics --- 00:26:05.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.609 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.609 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2220500 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2220500 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 2220500 ']' 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:05.610 11:34:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:05.870 [2024-06-10 11:34:34.631584] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:26:05.870 [2024-06-10 11:34:34.631644] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.870 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.870 [2024-06-10 11:34:34.701441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:05.870 [2024-06-10 11:34:34.775834] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.870 [2024-06-10 11:34:34.775871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.870 [2024-06-10 11:34:34.775879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.870 [2024-06-10 11:34:34.775885] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.870 [2024-06-10 11:34:34.775891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.870 [2024-06-10 11:34:34.776033] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.870 [2024-06-10 11:34:34.776038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:06.811 [2024-06-10 11:34:35.531516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:06.811 [2024-06-10 11:34:35.547663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:06.811 NULL1 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:06.811 Delay0 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:06.811 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.812 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:06.812 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.812 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2220573 00:26:06.812 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:06.812 11:34:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:06.812 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.812 [2024-06-10 11:34:35.622266] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
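For reference, the rpc_cmd calls traced above map directly onto scripts/rpc.py invocations. The sketch below is a hand-run approximation of that target setup and the perf load that follows, assuming the default /var/tmp/spdk.sock RPC socket and the SPDK build paths used by this job; it is not the harness's own rpc_cmd wrapper.

    # Sketch of the target-side RPC sequence traced above (delete_subsystem.sh@15-28).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Initiator-side load generator, same arguments as the perf invocation traced above.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!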
00:26:08.772 11:34:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.772 11:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.772 11:34:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 starting I/O failed: -6 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 starting I/O failed: -6 00:26:09.034 Write completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Write completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 starting I/O failed: -6 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 starting I/O failed: -6 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Write completed with error (sct=0, sc=8) 00:26:09.034 starting I/O failed: -6 00:26:09.034 Write completed with error (sct=0, sc=8) 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.034 Write completed with error (sct=0, sc=8) 00:26:09.034 Write completed with error (sct=0, sc=8) 00:26:09.034 starting I/O failed: -6 00:26:09.034 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 [2024-06-10 11:34:37.826569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb55c80 is same with the state(5) to be set 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 
Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 [2024-06-10 11:34:37.827574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb56040 is same with the state(5) to be set 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error 
(sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 starting I/O failed: -6 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 [2024-06-10 11:34:37.831750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb424000c00 is same with the state(5) to be set 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Read completed with error (sct=0, sc=8) 00:26:09.035 Write completed with error (sct=0, sc=8) 00:26:09.035 Read completed with 
error (sct=0, sc=8)
00:26:09.035 Write completed with error (sct=0, sc=8)
[further identical Read/Write completed with error (sct=0, sc=8) completions for this queue omitted]
00:26:09.980 [2024-06-10 11:34:38.802734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb35550 is same with the state(5) to be set
00:26:09.980 Write completed with error (sct=0, sc=8)
[further identical Read/Write completed with error (sct=0, sc=8) completions omitted]
00:26:09.980 [2024-06-10 11:34:38.829874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb55e60 is same with the state(5) to be set
00:26:09.980 Read completed with error (sct=0, sc=8)
[further identical Read/Write completed with error (sct=0, sc=8) completions omitted]
00:26:09.980 [2024-06-10 11:34:38.830293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb56220 is same with the state(5) to be set
00:26:09.980 Read completed with error (sct=0, sc=8)
[further identical Read/Write completed with error (sct=0, sc=8) completions omitted]
00:26:09.980 [2024-06-10 11:34:38.832931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb42400c780 is same with the state(5) to be set
00:26:09.980 Write completed with error (sct=0, sc=8)
[further identical Read/Write completed with error (sct=0, sc=8) completions omitted]
00:26:09.981 [2024-06-10 11:34:38.834104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb42400bfe0 is same with the state(5) to be set
00:26:09.981 Initializing NVMe Controllers
00:26:09.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:09.981 Controller IO queue size 128, less than required.
00:26:09.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:09.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:26:09.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:26:09.981 Initialization complete. Launching workers.
00:26:09.981 ========================================================
00:26:09.981 Latency(us)
00:26:09.981 Device Information : IOPS MiB/s Average min max
00:26:09.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.92 0.08 899034.83 598.39 1006185.98
00:26:09.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.93 0.08 970554.29 312.42 2003088.98
00:26:09.981 ========================================================
00:26:09.981 Total : 330.85 0.16 934256.01 312.42 2003088.98
00:26:09.981
00:26:09.981 [2024-06-10 11:34:38.834637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb35550 (9): Bad file descriptor
00:26:09.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:09.981 11:34:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:26:09.981 11:34:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:26:09.981 11:34:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2220573
00:26:09.981 11:34:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2220573
00:26:10.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2220573) - No such process
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2220573
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2220573
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait
00:26:10.554 11:34:39
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2220573 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.554 [2024-06-10 11:34:39.363898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2221395 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:10.554 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:10.554 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.554 [2024-06-10 11:34:39.433236] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
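What the xtrace above amounts to, written out as a plain shell sequence: the test re-creates cnode1 with a 10-namespace cap, exposes it on 10.0.0.2:4420, attaches the Delay0 bdev, and launches spdk_nvme_perf against it while polling the perf PID. This is only a minimal sketch built from the rpc.py path, NQN, address and perf arguments captured in this log; the nvmf_delete_subsystem call is not visible in this excerpt and appears below only as the assumed step that aborts the queued I/O.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
nqn=nqn.2016-06.io.spdk:cnode1

# Re-create the subsystem (max 10 namespaces), expose it on TCP and attach the Delay0 bdev
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns $nqn Delay0

# Queue I/O against it with the same perf arguments as above (3 s, QD 128, 70/30 randrw, 512 B blocks)
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Assumed teardown step (not captured in this excerpt): delete the subsystem while I/O is in flight,
# then poll until perf notices the aborted queue pairs and exits
$rpc nvmf_delete_subsystem $nqn
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && exit 1    # give up if perf never exits
    sleep 0.5
done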
00:26:11.126 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:11.126 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:11.126 11:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:11.697 11:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:11.697 11:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:11.697 11:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:11.958 11:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:11.958 11:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:11.958 11:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:12.529 11:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:12.529 11:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:12.529 11:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:13.100 11:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:13.100 11:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:13.100 11:34:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:13.670 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:13.670 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:13.670 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:13.670 Initializing NVMe Controllers 00:26:13.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.670 Controller IO queue size 128, less than required. 00:26:13.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:13.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:13.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:13.670 Initialization complete. Launching workers. 
00:26:13.670 ======================================================== 00:26:13.670 Latency(us) 00:26:13.670 Device Information : IOPS MiB/s Average min max 00:26:13.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002497.02 1000186.76 1006620.25 00:26:13.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003896.49 1000378.74 1010756.57 00:26:13.670 ======================================================== 00:26:13.670 Total : 256.00 0.12 1003196.76 1000186.76 1010756.57 00:26:13.670 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2221395 00:26:14.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2221395) - No such process 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2221395 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.240 rmmod nvme_tcp 00:26:14.240 rmmod nvme_fabrics 00:26:14.240 rmmod nvme_keyring 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2220500 ']' 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2220500 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 2220500 ']' 00:26:14.240 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 2220500 00:26:14.241 11:34:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2220500 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2220500' 00:26:14.241 killing process with pid 2220500 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 2220500 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 
2220500 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.241 11:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.786 11:34:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:16.786 00:26:16.786 real 0m17.778s 00:26:16.786 user 0m30.900s 00:26:16.786 sys 0m6.148s 00:26:16.786 11:34:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:16.786 11:34:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:16.786 ************************************ 00:26:16.786 END TEST nvmf_delete_subsystem 00:26:16.786 ************************************ 00:26:16.786 11:34:45 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:26:16.786 11:34:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:16.786 11:34:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:16.786 11:34:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:16.786 ************************************ 00:26:16.786 START TEST nvmf_ns_masking 00:26:16.786 ************************************ 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:26:16.786 * Looking for test storage... 
00:26:16.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=34be2a8e-6fa0-4694-9254-d5494c746e98 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:16.786 11:34:45 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:26:16.786 11:34:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.371 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:23.372 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:23.372 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:23.372 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:23.372 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:23.372 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:23.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:26:23.633 00:26:23.633 --- 10.0.0.2 ping statistics --- 00:26:23.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.633 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:26:23.633 00:26:23.633 --- 10.0.0.1 ping statistics --- 00:26:23.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.633 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2226249 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2226249 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 2226249 ']' 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:23.633 11:34:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:23.633 [2024-06-10 11:34:52.521683] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
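Behind the ping checks above, nvmf/common.sh has wired the two e810 ports into a back-to-back NVMe/TCP topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target-side interface (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1). The following is a condensed restatement of the commands traced in this run, using the interface and namespace names as logged; the nvmf_tgt line is the same invocation shown above.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# target port lives in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator side (default netns) and target side (inside the netns)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP (port 4420) in from the initiator interface, then verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# the NVMe-oF target is then started inside the namespace
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF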
00:26:23.633 [2024-06-10 11:34:52.521747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.633 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.633 [2024-06-10 11:34:52.591606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.894 [2024-06-10 11:34:52.667972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.894 [2024-06-10 11:34:52.668011] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.894 [2024-06-10 11:34:52.668019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.894 [2024-06-10 11:34:52.668026] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.894 [2024-06-10 11:34:52.668031] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.894 [2024-06-10 11:34:52.668148] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.894 [2024-06-10 11:34:52.668287] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.894 [2024-06-10 11:34:52.668446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.894 [2024-06-10 11:34:52.668447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:24.464 11:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:24.464 11:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:26:24.464 11:34:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.464 11:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:24.464 11:34:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:24.724 11:34:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.724 11:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:24.724 [2024-06-10 11:34:53.631272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.724 11:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:26:24.724 11:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:26:24.724 11:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:24.983 Malloc1 00:26:24.983 11:34:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:26:25.243 Malloc2 00:26:25.243 11:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:25.503 11:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:26:25.763 11:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.763 [2024-06-10 11:34:54.710029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.024 11:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:26:26.024 11:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 34be2a8e-6fa0-4694-9254-d5494c746e98 -a 10.0.0.2 -s 4420 -i 4 00:26:26.024 11:34:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:26:26.024 11:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:26:26.024 11:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:26.024 11:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:26.024 11:34:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:26:27.934 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:28.195 [ 0]:0x1 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2c465630bff247ab9b1c20497ce9b3ee 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2c465630bff247ab9b1c20497ce9b3ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:28.195 11:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:26:28.455 [ 0]:0x1 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2c465630bff247ab9b1c20497ce9b3ee 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2c465630bff247ab9b1c20497ce9b3ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:28.455 [ 1]:0x2 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=eb8798c4b84d402ebd30a608cc33f007 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ eb8798c4b84d402ebd30a608cc33f007 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:26:28.455 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:28.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:28.715 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.976 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:26:29.237 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:26:29.237 11:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 34be2a8e-6fa0-4694-9254-d5494c746e98 -a 10.0.0.2 -s 4420 -i 4 00:26:29.237 11:34:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:26:29.237 11:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:26:29.237 11:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.237 11:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:26:29.237 11:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:26:29.237 11:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:31.148 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:31.409 [ 0]:0x2 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=eb8798c4b84d402ebd30a608cc33f007 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ eb8798c4b84d402ebd30a608cc33f007 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:31.409 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:31.669 [ 0]:0x1 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2c465630bff247ab9b1c20497ce9b3ee 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2c465630bff247ab9b1c20497ce9b3ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:31.669 [ 1]:0x2 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=eb8798c4b84d402ebd30a608cc33f007 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ eb8798c4b84d402ebd30a608cc33f007 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:31.669 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:26:31.929 
11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:31.929 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:31.929 [ 0]:0x2 00:26:32.189 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:32.189 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:32.189 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=eb8798c4b84d402ebd30a608cc33f007 00:26:32.189 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ eb8798c4b84d402ebd30a608cc33f007 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:32.189 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:26:32.189 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:32.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:32.189 11:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 34be2a8e-6fa0-4694-9254-d5494c746e98 -a 10.0.0.2 -s 4420 -i 4 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:26:32.449 11:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:35.012 [ 0]:0x1 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=2c465630bff247ab9b1c20497ce9b3ee 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 2c465630bff247ab9b1c20497ce9b3ee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:35.012 [ 1]:0x2 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=eb8798c4b84d402ebd30a608cc33f007 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ eb8798c4b84d402ebd30a608cc33f007 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:26:35.012 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:35.013 11:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:35.274 [ 0]:0x2 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=eb8798c4b84d402ebd30a608cc33f007 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ eb8798c4b84d402ebd30a608cc33f007 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:26:35.274 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:26:35.535 [2024-06-10 11:35:04.252871] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:26:35.535 request: 00:26:35.535 { 00:26:35.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.535 "nsid": 2, 00:26:35.535 "host": "nqn.2016-06.io.spdk:host1", 00:26:35.535 "method": 
"nvmf_ns_remove_host", 00:26:35.535 "req_id": 1 00:26:35.535 } 00:26:35.535 Got JSON-RPC error response 00:26:35.535 response: 00:26:35.536 { 00:26:35.536 "code": -32602, 00:26:35.536 "message": "Invalid parameters" 00:26:35.536 } 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:26:35.536 [ 0]:0x2 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=eb8798c4b84d402ebd30a608cc33f007 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ eb8798c4b84d402ebd30a608cc33f007 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:35.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:35.536 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.796 rmmod nvme_tcp 00:26:35.796 rmmod nvme_fabrics 00:26:35.796 rmmod nvme_keyring 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2226249 ']' 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2226249 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 2226249 ']' 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 2226249 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2226249 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:35.796 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2226249' 00:26:35.796 killing process with pid 2226249 00:26:35.797 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 2226249 00:26:35.797 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 2226249 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.058 11:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.609 
11:35:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:38.609 00:26:38.609 real 0m21.655s 00:26:38.609 user 0m54.167s 00:26:38.609 sys 0m6.707s 00:26:38.609 11:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:38.609 11:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:26:38.609 ************************************ 00:26:38.609 END TEST nvmf_ns_masking 00:26:38.609 ************************************ 00:26:38.609 11:35:07 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:26:38.609 11:35:07 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:26:38.609 11:35:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:38.609 11:35:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:38.609 11:35:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:38.609 ************************************ 00:26:38.609 START TEST nvmf_nvme_cli 00:26:38.609 ************************************ 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:26:38.609 * Looking for test storage... 00:26:38.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:26:38.609 11:35:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:45.203 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:45.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.203 11:35:13 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:45.203 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:45.203 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.203 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.204 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.204 11:35:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.204 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.204 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.204 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.204 11:35:14 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:26:45.465 00:26:45.465 --- 10.0.0.2 ping statistics --- 00:26:45.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.465 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:45.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.523 ms 00:26:45.465 00:26:45.465 --- 10.0.0.1 ping statistics --- 00:26:45.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.465 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2233067 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2233067 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 2233067 ']' 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:45.465 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:45.466 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:45.466 11:35:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:45.466 [2024-06-10 11:35:14.367731] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:26:45.466 [2024-06-10 11:35:14.367799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.466 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.727 [2024-06-10 11:35:14.438233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.727 [2024-06-10 11:35:14.513630] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.727 [2024-06-10 11:35:14.513677] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.727 [2024-06-10 11:35:14.513688] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.727 [2024-06-10 11:35:14.513694] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.727 [2024-06-10 11:35:14.513700] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.727 [2024-06-10 11:35:14.513873] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.727 [2024-06-10 11:35:14.513992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.727 [2024-06-10 11:35:14.514152] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.727 [2024-06-10 11:35:14.514153] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.298 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:46.298 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:26:46.298 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:46.298 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:46.298 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.559 [2024-06-10 11:35:15.293567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.559 Malloc0 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:46.559 11:35:15 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.560 Malloc1 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.560 [2024-06-10 11:35:15.383429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:26:46.560 00:26:46.560 Discovery Log Number of Records 2, Generation counter 2 00:26:46.560 =====Discovery Log Entry 0====== 00:26:46.560 trtype: tcp 00:26:46.560 adrfam: ipv4 00:26:46.560 subtype: current discovery subsystem 00:26:46.560 treq: not required 00:26:46.560 portid: 0 00:26:46.560 trsvcid: 4420 00:26:46.560 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:46.560 traddr: 10.0.0.2 00:26:46.560 eflags: explicit discovery connections, duplicate discovery information 00:26:46.560 sectype: none 00:26:46.560 =====Discovery Log Entry 1====== 00:26:46.560 trtype: tcp 00:26:46.560 adrfam: ipv4 00:26:46.560 subtype: nvme subsystem 00:26:46.560 treq: not required 00:26:46.560 portid: 0 00:26:46.560 trsvcid: 4420 
00:26:46.560 subnqn: nqn.2016-06.io.spdk:cnode1 00:26:46.560 traddr: 10.0.0.2 00:26:46.560 eflags: none 00:26:46.560 sectype: none 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:26:46.560 11:35:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:48.471 11:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:26:48.471 11:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:26:48.471 11:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:48.471 11:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:26:48.471 11:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:26:48.471 11:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:26:50.384 11:35:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:26:50.384 /dev/nvme0n1 ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:50.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:50.384 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:50.385 11:35:19 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:50.385 rmmod nvme_tcp 00:26:50.385 rmmod nvme_fabrics 00:26:50.385 rmmod nvme_keyring 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2233067 ']' 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2233067 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 2233067 ']' 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 2233067 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:50.385 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2233067 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2233067' 00:26:50.646 killing process with pid 2233067 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 2233067 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 2233067 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.646 11:35:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.191 11:35:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.191 00:26:53.191 real 0m14.529s 00:26:53.191 user 0m22.161s 00:26:53.191 sys 0m5.844s 00:26:53.191 11:35:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:53.191 11:35:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:26:53.191 ************************************ 00:26:53.191 END TEST nvmf_nvme_cli 00:26:53.191 ************************************ 00:26:53.191 11:35:21 nvmf_tcp 
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:26:53.191 11:35:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:26:53.191 11:35:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:53.191 11:35:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:53.191 11:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.191 ************************************ 00:26:53.191 START TEST nvmf_vfio_user 00:26:53.191 ************************************ 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:26:53.191 * Looking for test storage... 00:26:53.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.191 11:35:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:26:53.192 
11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2234553 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2234553' 00:26:53.192 Process pid: 2234553 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2234553 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2234553 ']' 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:53.192 11:35:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:26:53.192 [2024-06-10 11:35:21.872405] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:26:53.192 [2024-06-10 11:35:21.872455] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.192 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.192 [2024-06-10 11:35:21.933824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.192 [2024-06-10 11:35:22.000934] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.192 [2024-06-10 11:35:22.000971] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.192 [2024-06-10 11:35:22.000978] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.192 [2024-06-10 11:35:22.000985] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.192 [2024-06-10 11:35:22.000990] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:53.192 [2024-06-10 11:35:22.001101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.192 [2024-06-10 11:35:22.001219] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.192 [2024-06-10 11:35:22.001376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.192 [2024-06-10 11:35:22.001377] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.192 11:35:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:53.192 11:35:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:26:53.192 11:35:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:26:54.134 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:26:54.394 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:26:54.394 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:26:54.395 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:54.395 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:26:54.395 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:54.655 Malloc1 00:26:54.655 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:26:54.916 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:26:55.179 11:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:26:55.439 11:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:55.439 11:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:26:55.440 11:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:26:55.440 Malloc2 00:26:55.700 11:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:26:55.700 11:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:26:55.960 11:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:26:56.222 11:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:26:56.222 11:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:26:56.222 11:35:25 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:26:56.222 11:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:26:56.222 11:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:26:56.222 11:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:26:56.222 [2024-06-10 11:35:25.096240] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:26:56.222 [2024-06-10 11:35:25.096269] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235241 ] 00:26:56.222 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.222 [2024-06-10 11:35:25.125378] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:26:56.222 [2024-06-10 11:35:25.130712] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:26:56.222 [2024-06-10 11:35:25.130731] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6b470e3000 00:26:56.222 [2024-06-10 11:35:25.131704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.132715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.133715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.134724] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.135723] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.136733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.137741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.138742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:26:56.222 [2024-06-10 11:35:25.139755] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:26:56.222 [2024-06-10 11:35:25.139767] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6b470d8000 00:26:56.222 [2024-06-10 11:35:25.141092] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:26:56.222 [2024-06-10 11:35:25.161840] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:26:56.222 [2024-06-10 11:35:25.161866] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:26:56.222 [2024-06-10 11:35:25.164909] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:26:56.222 [2024-06-10 11:35:25.164952] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:26:56.222 [2024-06-10 11:35:25.165031] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:26:56.222 [2024-06-10 11:35:25.165048] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:26:56.222 [2024-06-10 11:35:25.165053] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:26:56.223 [2024-06-10 11:35:25.165906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:26:56.223 [2024-06-10 11:35:25.165915] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:26:56.223 [2024-06-10 11:35:25.165922] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:26:56.223 [2024-06-10 11:35:25.166910] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:26:56.223 [2024-06-10 11:35:25.166918] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:26:56.223 [2024-06-10 11:35:25.166925] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:26:56.223 [2024-06-10 11:35:25.167914] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:26:56.223 [2024-06-10 11:35:25.167921] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:56.223 [2024-06-10 11:35:25.168917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:26:56.223 [2024-06-10 11:35:25.168925] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:26:56.223 [2024-06-10 11:35:25.168929] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:26:56.223 [2024-06-10 11:35:25.168936] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:56.223 [2024-06-10 11:35:25.169041] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:26:56.223 [2024-06-10 11:35:25.169046] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:56.223 [2024-06-10 11:35:25.169053] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:26:56.223 [2024-06-10 11:35:25.169923] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:26:56.223 [2024-06-10 11:35:25.170927] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:26:56.223 [2024-06-10 11:35:25.171935] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:26:56.223 [2024-06-10 11:35:25.172937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:26:56.223 [2024-06-10 11:35:25.173015] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:56.223 [2024-06-10 11:35:25.173948] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:26:56.223 [2024-06-10 11:35:25.173956] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:56.223 [2024-06-10 11:35:25.173960] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.173982] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:26:56.223 [2024-06-10 11:35:25.173989] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174004] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:56.223 [2024-06-10 11:35:25.174009] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:56.223 [2024-06-10 11:35:25.174022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:56.223 [2024-06-10 11:35:25.174073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:26:56.223 [2024-06-10 11:35:25.174082] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:26:56.223 [2024-06-10 11:35:25.174087] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:26:56.223 [2024-06-10 11:35:25.174091] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:26:56.223 [2024-06-10 11:35:25.174095] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:26:56.223 [2024-06-10 11:35:25.174100] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:26:56.223 [2024-06-10 11:35:25.174105] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:26:56.223 [2024-06-10 11:35:25.174110] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174117] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:26:56.223 [2024-06-10 11:35:25.174146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:26:56.223 [2024-06-10 11:35:25.174158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.223 [2024-06-10 11:35:25.174169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.223 [2024-06-10 11:35:25.174177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.223 [2024-06-10 11:35:25.174185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.223 [2024-06-10 11:35:25.174190] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174197] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:26:56.223 [2024-06-10 11:35:25.174218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:26:56.223 [2024-06-10 11:35:25.174225] nvme_ctrlr.c:2945:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:26:56.223 [2024-06-10 11:35:25.174231] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174237] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174243] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:56.223 [2024-06-10 11:35:25.174262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:26:56.223 [2024-06-10 11:35:25.174312] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174320] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174327] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:26:56.223 [2024-06-10 11:35:25.174332] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:26:56.223 [2024-06-10 11:35:25.174338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:26:56.223 [2024-06-10 11:35:25.174352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:26:56.223 [2024-06-10 11:35:25.174361] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:26:56.223 [2024-06-10 11:35:25.174369] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174377] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174383] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:56.223 [2024-06-10 11:35:25.174388] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:56.223 [2024-06-10 11:35:25.174394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:56.223 [2024-06-10 11:35:25.174410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:26:56.223 [2024-06-10 11:35:25.174422] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174429] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174436] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:26:56.223 [2024-06-10 11:35:25.174440] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:56.223 [2024-06-10 11:35:25.174446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:56.223 [2024-06-10 11:35:25.174459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:26:56.223 [2024-06-10 11:35:25.174467] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174473] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174480] 
nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174486] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174490] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174495] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:26:56.223 [2024-06-10 11:35:25.174500] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:26:56.223 [2024-06-10 11:35:25.174505] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:26:56.224 [2024-06-10 11:35:25.174522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-06-10 11:35:25.174531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:26:56.224 [2024-06-10 11:35:25.174543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-06-10 11:35:25.174551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:26:56.224 [2024-06-10 11:35:25.174562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-06-10 11:35:25.174575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:26:56.224 [2024-06-10 11:35:25.174585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:56.224 [2024-06-10 11:35:25.174596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:26:56.224 [2024-06-10 11:35:25.174609] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:26:56.224 [2024-06-10 11:35:25.174613] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:26:56.224 [2024-06-10 11:35:25.174617] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:26:56.224 [2024-06-10 11:35:25.174622] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:26:56.224 [2024-06-10 11:35:25.174629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:26:56.224 [2024-06-10 11:35:25.174636] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:26:56.224 [2024-06-10 11:35:25.174641] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:26:56.224 [2024-06-10 11:35:25.174647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:26:56.224 [2024-06-10 11:35:25.174654] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:26:56.224 [2024-06-10 11:35:25.174658] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:26:56.224 [2024-06-10 11:35:25.174664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:26:56.224 [2024-06-10 11:35:25.174676] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:26:56.224 [2024-06-10 11:35:25.174681] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:26:56.224 [2024-06-10 11:35:25.174686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:26:56.224 [2024-06-10 11:35:25.174693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:26:56.224 [2024-06-10 11:35:25.174706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:26:56.224 [2024-06-10 11:35:25.174718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:26:56.224 [2024-06-10 11:35:25.174725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:26:56.224 ===================================================== 00:26:56.224 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:26:56.224 ===================================================== 00:26:56.224 Controller Capabilities/Features 00:26:56.224 ================================ 00:26:56.224 Vendor ID: 4e58 00:26:56.224 Subsystem Vendor ID: 4e58 00:26:56.224 Serial Number: SPDK1 00:26:56.224 Model Number: SPDK bdev Controller 00:26:56.224 Firmware Version: 24.09 00:26:56.224 Recommended Arb Burst: 6 00:26:56.224 IEEE OUI Identifier: 8d 6b 50 00:26:56.224 Multi-path I/O 00:26:56.224 May have multiple subsystem ports: Yes 00:26:56.224 May have multiple controllers: Yes 00:26:56.224 Associated with SR-IOV VF: No 00:26:56.224 Max Data Transfer Size: 131072 00:26:56.224 Max Number of Namespaces: 32 00:26:56.224 Max Number of I/O Queues: 127 00:26:56.224 NVMe Specification Version (VS): 1.3 00:26:56.224 NVMe Specification Version (Identify): 1.3 00:26:56.224 Maximum Queue Entries: 256 00:26:56.224 Contiguous Queues Required: Yes 00:26:56.224 Arbitration Mechanisms Supported 00:26:56.224 Weighted Round Robin: Not Supported 00:26:56.224 Vendor Specific: Not Supported 00:26:56.224 Reset Timeout: 15000 ms 00:26:56.224 Doorbell Stride: 4 bytes 00:26:56.224 NVM Subsystem Reset: Not Supported 00:26:56.224 Command Sets Supported 00:26:56.224 NVM Command Set: Supported 00:26:56.224 Boot Partition: Not Supported 00:26:56.224 Memory Page Size Minimum: 4096 bytes 00:26:56.224 Memory Page Size Maximum: 4096 bytes 00:26:56.224 Persistent Memory Region: Not Supported 00:26:56.224 Optional Asynchronous Events Supported 00:26:56.224 Namespace Attribute Notices: Supported 00:26:56.224 Firmware Activation Notices: Not Supported 00:26:56.224 ANA Change Notices: Not Supported 00:26:56.224 PLE Aggregate Log Change Notices: 
Not Supported 00:26:56.224 LBA Status Info Alert Notices: Not Supported 00:26:56.224 EGE Aggregate Log Change Notices: Not Supported 00:26:56.224 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.224 Zone Descriptor Change Notices: Not Supported 00:26:56.224 Discovery Log Change Notices: Not Supported 00:26:56.224 Controller Attributes 00:26:56.224 128-bit Host Identifier: Supported 00:26:56.224 Non-Operational Permissive Mode: Not Supported 00:26:56.224 NVM Sets: Not Supported 00:26:56.224 Read Recovery Levels: Not Supported 00:26:56.224 Endurance Groups: Not Supported 00:26:56.224 Predictable Latency Mode: Not Supported 00:26:56.224 Traffic Based Keep ALive: Not Supported 00:26:56.224 Namespace Granularity: Not Supported 00:26:56.224 SQ Associations: Not Supported 00:26:56.224 UUID List: Not Supported 00:26:56.224 Multi-Domain Subsystem: Not Supported 00:26:56.224 Fixed Capacity Management: Not Supported 00:26:56.224 Variable Capacity Management: Not Supported 00:26:56.224 Delete Endurance Group: Not Supported 00:26:56.224 Delete NVM Set: Not Supported 00:26:56.224 Extended LBA Formats Supported: Not Supported 00:26:56.224 Flexible Data Placement Supported: Not Supported 00:26:56.224 00:26:56.224 Controller Memory Buffer Support 00:26:56.224 ================================ 00:26:56.224 Supported: No 00:26:56.224 00:26:56.224 Persistent Memory Region Support 00:26:56.224 ================================ 00:26:56.224 Supported: No 00:26:56.224 00:26:56.224 Admin Command Set Attributes 00:26:56.224 ============================ 00:26:56.224 Security Send/Receive: Not Supported 00:26:56.224 Format NVM: Not Supported 00:26:56.224 Firmware Activate/Download: Not Supported 00:26:56.224 Namespace Management: Not Supported 00:26:56.224 Device Self-Test: Not Supported 00:26:56.224 Directives: Not Supported 00:26:56.224 NVMe-MI: Not Supported 00:26:56.224 Virtualization Management: Not Supported 00:26:56.224 Doorbell Buffer Config: Not Supported 00:26:56.224 Get LBA Status Capability: Not Supported 00:26:56.224 Command & Feature Lockdown Capability: Not Supported 00:26:56.224 Abort Command Limit: 4 00:26:56.224 Async Event Request Limit: 4 00:26:56.224 Number of Firmware Slots: N/A 00:26:56.224 Firmware Slot 1 Read-Only: N/A 00:26:56.224 Firmware Activation Without Reset: N/A 00:26:56.224 Multiple Update Detection Support: N/A 00:26:56.224 Firmware Update Granularity: No Information Provided 00:26:56.224 Per-Namespace SMART Log: No 00:26:56.224 Asymmetric Namespace Access Log Page: Not Supported 00:26:56.224 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:26:56.224 Command Effects Log Page: Supported 00:26:56.224 Get Log Page Extended Data: Supported 00:26:56.224 Telemetry Log Pages: Not Supported 00:26:56.224 Persistent Event Log Pages: Not Supported 00:26:56.224 Supported Log Pages Log Page: May Support 00:26:56.224 Commands Supported & Effects Log Page: Not Supported 00:26:56.224 Feature Identifiers & Effects Log Page:May Support 00:26:56.224 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.224 Data Area 4 for Telemetry Log: Not Supported 00:26:56.224 Error Log Page Entries Supported: 128 00:26:56.224 Keep Alive: Supported 00:26:56.224 Keep Alive Granularity: 10000 ms 00:26:56.224 00:26:56.224 NVM Command Set Attributes 00:26:56.224 ========================== 00:26:56.224 Submission Queue Entry Size 00:26:56.224 Max: 64 00:26:56.224 Min: 64 00:26:56.224 Completion Queue Entry Size 00:26:56.224 Max: 16 00:26:56.224 Min: 16 00:26:56.224 Number of Namespaces: 32 00:26:56.224 Compare 
Command: Supported 00:26:56.224 Write Uncorrectable Command: Not Supported 00:26:56.224 Dataset Management Command: Supported 00:26:56.224 Write Zeroes Command: Supported 00:26:56.224 Set Features Save Field: Not Supported 00:26:56.224 Reservations: Not Supported 00:26:56.224 Timestamp: Not Supported 00:26:56.224 Copy: Supported 00:26:56.224 Volatile Write Cache: Present 00:26:56.224 Atomic Write Unit (Normal): 1 00:26:56.224 Atomic Write Unit (PFail): 1 00:26:56.224 Atomic Compare & Write Unit: 1 00:26:56.224 Fused Compare & Write: Supported 00:26:56.224 Scatter-Gather List 00:26:56.224 SGL Command Set: Supported (Dword aligned) 00:26:56.224 SGL Keyed: Not Supported 00:26:56.224 SGL Bit Bucket Descriptor: Not Supported 00:26:56.224 SGL Metadata Pointer: Not Supported 00:26:56.224 Oversized SGL: Not Supported 00:26:56.224 SGL Metadata Address: Not Supported 00:26:56.225 SGL Offset: Not Supported 00:26:56.225 Transport SGL Data Block: Not Supported 00:26:56.225 Replay Protected Memory Block: Not Supported 00:26:56.225 00:26:56.225 Firmware Slot Information 00:26:56.225 ========================= 00:26:56.225 Active slot: 1 00:26:56.225 Slot 1 Firmware Revision: 24.09 00:26:56.225 00:26:56.225 00:26:56.225 Commands Supported and Effects 00:26:56.225 ============================== 00:26:56.225 Admin Commands 00:26:56.225 -------------- 00:26:56.225 Get Log Page (02h): Supported 00:26:56.225 Identify (06h): Supported 00:26:56.225 Abort (08h): Supported 00:26:56.225 Set Features (09h): Supported 00:26:56.225 Get Features (0Ah): Supported 00:26:56.225 Asynchronous Event Request (0Ch): Supported 00:26:56.225 Keep Alive (18h): Supported 00:26:56.225 I/O Commands 00:26:56.225 ------------ 00:26:56.225 Flush (00h): Supported LBA-Change 00:26:56.225 Write (01h): Supported LBA-Change 00:26:56.225 Read (02h): Supported 00:26:56.225 Compare (05h): Supported 00:26:56.225 Write Zeroes (08h): Supported LBA-Change 00:26:56.225 Dataset Management (09h): Supported LBA-Change 00:26:56.225 Copy (19h): Supported LBA-Change 00:26:56.225 Unknown (79h): Supported LBA-Change 00:26:56.225 Unknown (7Ah): Supported 00:26:56.225 00:26:56.225 Error Log 00:26:56.225 ========= 00:26:56.225 00:26:56.225 Arbitration 00:26:56.225 =========== 00:26:56.225 Arbitration Burst: 1 00:26:56.225 00:26:56.225 Power Management 00:26:56.225 ================ 00:26:56.225 Number of Power States: 1 00:26:56.225 Current Power State: Power State #0 00:26:56.225 Power State #0: 00:26:56.225 Max Power: 0.00 W 00:26:56.225 Non-Operational State: Operational 00:26:56.225 Entry Latency: Not Reported 00:26:56.225 Exit Latency: Not Reported 00:26:56.225 Relative Read Throughput: 0 00:26:56.225 Relative Read Latency: 0 00:26:56.225 Relative Write Throughput: 0 00:26:56.225 Relative Write Latency: 0 00:26:56.225 Idle Power: Not Reported 00:26:56.225 Active Power: Not Reported 00:26:56.225 Non-Operational Permissive Mode: Not Supported 00:26:56.225 00:26:56.225 Health Information 00:26:56.225 ================== 00:26:56.225 Critical Warnings: 00:26:56.225 Available Spare Space: OK 00:26:56.225 Temperature: OK 00:26:56.225 Device Reliability: OK 00:26:56.225 Read Only: No 00:26:56.225 Volatile Memory Backup: OK 00:26:56.225 Current Temperature: 0 Kelvin (-2[2024-06-10 11:35:25.174827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:26:56.225 [2024-06-10 11:35:25.174842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:26:56.225 [2024-06-10 11:35:25.174868] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:26:56.225 [2024-06-10 11:35:25.174877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-06-10 11:35:25.174884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-06-10 11:35:25.174890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-06-10 11:35:25.174896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.225 [2024-06-10 11:35:25.177677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:26:56.225 [2024-06-10 11:35:25.177687] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:26:56.225 [2024-06-10 11:35:25.177977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:26:56.225 [2024-06-10 11:35:25.178027] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:26:56.225 [2024-06-10 11:35:25.178034] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:26:56.225 [2024-06-10 11:35:25.178987] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:26:56.225 [2024-06-10 11:35:25.178998] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:26:56.225 [2024-06-10 11:35:25.179060] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:26:56.225 [2024-06-10 11:35:25.181014] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:26:56.486 73 Celsius) 00:26:56.486 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:56.486 Available Spare: 0% 00:26:56.486 Available Spare Threshold: 0% 00:26:56.486 Life Percentage Used: 0% 00:26:56.486 Data Units Read: 0 00:26:56.486 Data Units Written: 0 00:26:56.486 Host Read Commands: 0 00:26:56.486 Host Write Commands: 0 00:26:56.486 Controller Busy Time: 0 minutes 00:26:56.486 Power Cycles: 0 00:26:56.486 Power On Hours: 0 hours 00:26:56.486 Unsafe Shutdowns: 0 00:26:56.486 Unrecoverable Media Errors: 0 00:26:56.486 Lifetime Error Log Entries: 0 00:26:56.486 Warning Temperature Time: 0 minutes 00:26:56.486 Critical Temperature Time: 0 minutes 00:26:56.486 00:26:56.486 Number of Queues 00:26:56.486 ================ 00:26:56.486 Number of I/O Submission Queues: 127 00:26:56.486 Number of I/O Completion Queues: 127 00:26:56.486 00:26:56.486 Active Namespaces 00:26:56.486 ================= 00:26:56.486 Namespace ID:1 00:26:56.486 Error Recovery Timeout: Unlimited 00:26:56.486 Command Set Identifier: NVM (00h) 00:26:56.486 Deallocate: Supported 00:26:56.486 Deallocated/Unwritten Error: Not Supported 00:26:56.486 Deallocated Read Value: Unknown 00:26:56.486 Deallocate 
in Write Zeroes: Not Supported 00:26:56.486 Deallocated Guard Field: 0xFFFF 00:26:56.486 Flush: Supported 00:26:56.486 Reservation: Supported 00:26:56.486 Namespace Sharing Capabilities: Multiple Controllers 00:26:56.486 Size (in LBAs): 131072 (0GiB) 00:26:56.486 Capacity (in LBAs): 131072 (0GiB) 00:26:56.486 Utilization (in LBAs): 131072 (0GiB) 00:26:56.486 NGUID: 2EBBC0E823C440D4A2E69A9FAF9B9D0F 00:26:56.486 UUID: 2ebbc0e8-23c4-40d4-a2e6-9a9faf9b9d0f 00:26:56.486 Thin Provisioning: Not Supported 00:26:56.486 Per-NS Atomic Units: Yes 00:26:56.486 Atomic Boundary Size (Normal): 0 00:26:56.486 Atomic Boundary Size (PFail): 0 00:26:56.486 Atomic Boundary Offset: 0 00:26:56.486 Maximum Single Source Range Length: 65535 00:26:56.486 Maximum Copy Length: 65535 00:26:56.486 Maximum Source Range Count: 1 00:26:56.486 NGUID/EUI64 Never Reused: No 00:26:56.486 Namespace Write Protected: No 00:26:56.486 Number of LBA Formats: 1 00:26:56.486 Current LBA Format: LBA Format #00 00:26:56.486 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:56.486 00:26:56.486 11:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:26:56.486 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.486 [2024-06-10 11:35:25.363306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:01.772 Initializing NVMe Controllers 00:27:01.772 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:01.772 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:27:01.772 Initialization complete. Launching workers. 00:27:01.772 ======================================================== 00:27:01.772 Latency(us) 00:27:01.772 Device Information : IOPS MiB/s Average min max 00:27:01.772 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35135.68 137.25 3642.45 1194.74 9001.31 00:27:01.772 ======================================================== 00:27:01.772 Total : 35135.68 137.25 3642.45 1194.74 9001.31 00:27:01.772 00:27:01.772 [2024-06-10 11:35:30.385639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:01.772 11:35:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:27:01.772 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.772 [2024-06-10 11:35:30.588640] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:07.076 Initializing NVMe Controllers 00:27:07.076 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:07.076 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:27:07.076 Initialization complete. Launching workers. 
00:27:07.076 ======================================================== 00:27:07.076 Latency(us) 00:27:07.076 Device Information : IOPS MiB/s Average min max 00:27:07.076 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.92 62.71 7978.88 4988.07 10973.61 00:27:07.076 ======================================================== 00:27:07.076 Total : 16052.92 62.71 7978.88 4988.07 10973.61 00:27:07.076 00:27:07.076 [2024-06-10 11:35:35.628479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:07.076 11:35:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:27:07.076 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.076 [2024-06-10 11:35:35.853503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:12.361 [2024-06-10 11:35:40.918871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:12.361 Initializing NVMe Controllers 00:27:12.361 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:12.361 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:27:12.361 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:27:12.361 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:27:12.361 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:27:12.361 Initialization complete. Launching workers. 00:27:12.361 Starting thread on core 2 00:27:12.361 Starting thread on core 3 00:27:12.361 Starting thread on core 1 00:27:12.361 11:35:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:27:12.361 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.361 [2024-06-10 11:35:41.192001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:15.674 [2024-06-10 11:35:44.257929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:15.674 Initializing NVMe Controllers 00:27:15.674 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:15.674 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:15.674 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:27:15.674 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:27:15.674 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:27:15.674 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:27:15.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:27:15.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:27:15.674 Initialization complete. Launching workers. 
00:27:15.674 Starting thread on core 1 with urgent priority queue 00:27:15.674 Starting thread on core 2 with urgent priority queue 00:27:15.674 Starting thread on core 3 with urgent priority queue 00:27:15.674 Starting thread on core 0 with urgent priority queue 00:27:15.674 SPDK bdev Controller (SPDK1 ) core 0: 5302.00 IO/s 18.86 secs/100000 ios 00:27:15.674 SPDK bdev Controller (SPDK1 ) core 1: 6230.00 IO/s 16.05 secs/100000 ios 00:27:15.674 SPDK bdev Controller (SPDK1 ) core 2: 7103.00 IO/s 14.08 secs/100000 ios 00:27:15.674 SPDK bdev Controller (SPDK1 ) core 3: 7398.33 IO/s 13.52 secs/100000 ios 00:27:15.674 ======================================================== 00:27:15.674 00:27:15.674 11:35:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:27:15.674 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.674 [2024-06-10 11:35:44.526332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:15.674 Initializing NVMe Controllers 00:27:15.674 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:15.674 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:15.674 Namespace ID: 1 size: 0GB 00:27:15.674 Initialization complete. 00:27:15.674 INFO: using host memory buffer for IO 00:27:15.674 Hello world! 00:27:15.674 [2024-06-10 11:35:44.557540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:15.674 11:35:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:27:15.934 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.934 [2024-06-10 11:35:44.821182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:16.876 Initializing NVMe Controllers 00:27:16.876 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:16.876 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:16.876 Initialization complete. Launching workers. 
00:27:16.876 submit (in ns) avg, min, max = 7915.5, 3900.8, 4000120.0 00:27:16.876 complete (in ns) avg, min, max = 18507.3, 2382.5, 4002440.8 00:27:16.876 00:27:16.876 Submit histogram 00:27:16.876 ================ 00:27:16.876 Range in us Cumulative Count 00:27:16.876 3.893 - 3.920: 1.0862% ( 214) 00:27:16.876 3.920 - 3.947: 6.0908% ( 986) 00:27:16.876 3.947 - 3.973: 14.9122% ( 1738) 00:27:16.876 3.973 - 4.000: 25.5456% ( 2095) 00:27:16.876 4.000 - 4.027: 37.3363% ( 2323) 00:27:16.876 4.027 - 4.053: 49.1574% ( 2329) 00:27:16.876 4.053 - 4.080: 64.1001% ( 2944) 00:27:16.876 4.080 - 4.107: 78.0936% ( 2757) 00:27:16.876 4.107 - 4.133: 88.6458% ( 2079) 00:27:16.876 4.133 - 4.160: 94.7620% ( 1205) 00:27:16.876 4.160 - 4.187: 97.6804% ( 575) 00:27:16.876 4.187 - 4.213: 98.9087% ( 242) 00:27:16.876 4.213 - 4.240: 99.2590% ( 69) 00:27:16.876 4.240 - 4.267: 99.3909% ( 26) 00:27:16.876 4.267 - 4.293: 99.4315% ( 8) 00:27:16.876 4.293 - 4.320: 99.4518% ( 4) 00:27:16.876 4.347 - 4.373: 99.4569% ( 1) 00:27:16.876 4.453 - 4.480: 99.4620% ( 1) 00:27:16.876 4.587 - 4.613: 99.4671% ( 1) 00:27:16.876 4.747 - 4.773: 99.4721% ( 1) 00:27:16.876 4.800 - 4.827: 99.4772% ( 1) 00:27:16.876 4.853 - 4.880: 99.4823% ( 1) 00:27:16.877 5.120 - 5.147: 99.4874% ( 1) 00:27:16.877 5.173 - 5.200: 99.4924% ( 1) 00:27:16.877 5.253 - 5.280: 99.4975% ( 1) 00:27:16.877 5.307 - 5.333: 99.5026% ( 1) 00:27:16.877 5.653 - 5.680: 99.5077% ( 1) 00:27:16.877 5.733 - 5.760: 99.5127% ( 1) 00:27:16.877 5.813 - 5.840: 99.5178% ( 1) 00:27:16.877 5.973 - 6.000: 99.5229% ( 1) 00:27:16.877 6.053 - 6.080: 99.5330% ( 2) 00:27:16.877 6.133 - 6.160: 99.5483% ( 3) 00:27:16.877 6.187 - 6.213: 99.5533% ( 1) 00:27:16.877 6.213 - 6.240: 99.5635% ( 2) 00:27:16.877 6.240 - 6.267: 99.5686% ( 1) 00:27:16.877 6.293 - 6.320: 99.5736% ( 1) 00:27:16.877 6.320 - 6.347: 99.5838% ( 2) 00:27:16.877 6.347 - 6.373: 99.5990% ( 3) 00:27:16.877 6.373 - 6.400: 99.6143% ( 3) 00:27:16.877 6.400 - 6.427: 99.6244% ( 2) 00:27:16.877 6.427 - 6.453: 99.6346% ( 2) 00:27:16.877 6.453 - 6.480: 99.6447% ( 2) 00:27:16.877 6.480 - 6.507: 99.6599% ( 3) 00:27:16.877 6.507 - 6.533: 99.6701% ( 2) 00:27:16.877 6.533 - 6.560: 99.6752% ( 1) 00:27:16.877 6.613 - 6.640: 99.6853% ( 2) 00:27:16.877 6.640 - 6.667: 99.6904% ( 1) 00:27:16.877 6.667 - 6.693: 99.7158% ( 5) 00:27:16.877 6.693 - 6.720: 99.7208% ( 1) 00:27:16.877 6.747 - 6.773: 99.7259% ( 1) 00:27:16.877 6.773 - 6.800: 99.7310% ( 1) 00:27:16.877 6.800 - 6.827: 99.7361% ( 1) 00:27:16.877 6.933 - 6.987: 99.7462% ( 2) 00:27:16.877 6.987 - 7.040: 99.7513% ( 1) 00:27:16.877 7.093 - 7.147: 99.7564% ( 1) 00:27:16.877 7.200 - 7.253: 99.7614% ( 1) 00:27:16.877 7.360 - 7.413: 99.7665% ( 1) 00:27:16.877 7.413 - 7.467: 99.7767% ( 2) 00:27:16.877 7.467 - 7.520: 99.7868% ( 2) 00:27:16.877 7.573 - 7.627: 99.7919% ( 1) 00:27:16.877 7.627 - 7.680: 99.8021% ( 2) 00:27:16.877 7.680 - 7.733: 99.8071% ( 1) 00:27:16.877 7.893 - 7.947: 99.8122% ( 1) 00:27:16.877 7.947 - 8.000: 99.8173% ( 1) 00:27:16.877 8.213 - 8.267: 99.8224% ( 1) 00:27:16.877 8.267 - 8.320: 99.8274% ( 1) 00:27:16.877 8.373 - 8.427: 99.8325% ( 1) 00:27:16.877 8.427 - 8.480: 99.8376% ( 1) 00:27:16.877 8.480 - 8.533: 99.8477% ( 2) 00:27:16.877 8.693 - 8.747: 99.8528% ( 1) 00:27:16.877 8.853 - 8.907: 99.8630% ( 2) 00:27:16.877 8.907 - 8.960: 99.8680% ( 1) 00:27:16.877 8.960 - 9.013: 99.8731% ( 1) 00:27:16.877 9.013 - 9.067: 99.8833% ( 2) 00:27:16.877 9.120 - 9.173: 99.8883% ( 1) 00:27:16.877 12.800 - 12.853: 99.8934% ( 1) 00:27:16.877 13.013 - 13.067: 99.8985% ( 1) 00:27:16.877 13.867 - 
13.973: 99.9036% ( 1) 00:27:16.877 3986.773 - 4014.080: 100.0000% ( 19) 00:27:16.877 00:27:16.877 [2024-06-10 11:35:45.843297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:17.138 Complete histogram 00:27:17.138 ================== 00:27:17.138 Range in us Cumulative Count 00:27:17.138 2.373 - 2.387: 0.0152% ( 3) 00:27:17.138 2.387 - 2.400: 0.0254% ( 2) 00:27:17.138 2.400 - 2.413: 2.0049% ( 390) 00:27:17.138 2.413 - 2.427: 2.1368% ( 26) 00:27:17.138 2.427 - 2.440: 2.5429% ( 80) 00:27:17.138 2.440 - 2.453: 2.9997% ( 90) 00:27:17.138 2.453 - 2.467: 48.7108% ( 9006) 00:27:17.138 2.467 - 2.480: 59.0701% ( 2041) 00:27:17.138 2.480 - 2.493: 76.0278% ( 3341) 00:27:17.138 2.493 - 2.507: 83.1083% ( 1395) 00:27:17.138 2.507 - 2.520: 85.1030% ( 393) 00:27:17.138 2.520 - 2.533: 88.7067% ( 710) 00:27:17.138 2.533 - 2.547: 93.5184% ( 948) 00:27:17.138 2.547 - 2.560: 96.4166% ( 571) 00:27:17.138 2.560 - 2.573: 98.1575% ( 343) 00:27:17.138 2.573 - 2.587: 99.0153% ( 169) 00:27:17.138 2.587 - 2.600: 99.2488% ( 46) 00:27:17.138 2.600 - 2.613: 99.3199% ( 14) 00:27:17.138 2.613 - 2.627: 99.3402% ( 4) 00:27:17.138 4.373 - 4.400: 99.3452% ( 1) 00:27:17.138 4.533 - 4.560: 99.3503% ( 1) 00:27:17.138 4.640 - 4.667: 99.3605% ( 2) 00:27:17.138 4.720 - 4.747: 99.3655% ( 1) 00:27:17.138 4.747 - 4.773: 99.3706% ( 1) 00:27:17.138 4.773 - 4.800: 99.3808% ( 2) 00:27:17.138 4.800 - 4.827: 99.3909% ( 2) 00:27:17.138 4.827 - 4.853: 99.4011% ( 2) 00:27:17.138 4.853 - 4.880: 99.4112% ( 2) 00:27:17.138 4.880 - 4.907: 99.4163% ( 1) 00:27:17.138 4.933 - 4.960: 99.4315% ( 3) 00:27:17.138 4.960 - 4.987: 99.4468% ( 3) 00:27:17.138 4.987 - 5.013: 99.4518% ( 1) 00:27:17.138 5.013 - 5.040: 99.4620% ( 2) 00:27:17.138 5.040 - 5.067: 99.4671% ( 1) 00:27:17.138 5.067 - 5.093: 99.4721% ( 1) 00:27:17.138 5.093 - 5.120: 99.4823% ( 2) 00:27:17.138 5.120 - 5.147: 99.4874% ( 1) 00:27:17.138 5.200 - 5.227: 99.4975% ( 2) 00:27:17.138 5.253 - 5.280: 99.5026% ( 1) 00:27:17.138 5.547 - 5.573: 99.5077% ( 1) 00:27:17.138 6.053 - 6.080: 99.5127% ( 1) 00:27:17.138 6.187 - 6.213: 99.5178% ( 1) 00:27:17.138 6.213 - 6.240: 99.5229% ( 1) 00:27:17.138 6.267 - 6.293: 99.5280% ( 1) 00:27:17.138 6.293 - 6.320: 99.5330% ( 1) 00:27:17.138 6.320 - 6.347: 99.5381% ( 1) 00:27:17.138 6.347 - 6.373: 99.5432% ( 1) 00:27:17.138 6.587 - 6.613: 99.5483% ( 1) 00:27:17.138 6.800 - 6.827: 99.5533% ( 1) 00:27:17.138 6.933 - 6.987: 99.5635% ( 2) 00:27:17.138 6.987 - 7.040: 99.5686% ( 1) 00:27:17.138 7.360 - 7.413: 99.5736% ( 1) 00:27:17.138 7.467 - 7.520: 99.5787% ( 1) 00:27:17.138 7.573 - 7.627: 99.5838% ( 1) 00:27:17.138 11.093 - 11.147: 99.5889% ( 1) 00:27:17.138 12.107 - 12.160: 99.5939% ( 1) 00:27:17.138 15.360 - 15.467: 99.5990% ( 1) 00:27:17.138 3986.773 - 4014.080: 100.0000% ( 79) 00:27:17.138 00:27:17.138 11:35:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:27:17.138 11:35:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:27:17.138 11:35:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:27:17.138 11:35:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:27:17.138 11:35:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:17.138 [ 00:27:17.138 { 
00:27:17.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:17.138 "subtype": "Discovery", 00:27:17.138 "listen_addresses": [], 00:27:17.138 "allow_any_host": true, 00:27:17.138 "hosts": [] 00:27:17.138 }, 00:27:17.138 { 00:27:17.138 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:17.138 "subtype": "NVMe", 00:27:17.138 "listen_addresses": [ 00:27:17.138 { 00:27:17.138 "trtype": "VFIOUSER", 00:27:17.138 "adrfam": "IPv4", 00:27:17.138 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:17.138 "trsvcid": "0" 00:27:17.138 } 00:27:17.138 ], 00:27:17.138 "allow_any_host": true, 00:27:17.138 "hosts": [], 00:27:17.138 "serial_number": "SPDK1", 00:27:17.138 "model_number": "SPDK bdev Controller", 00:27:17.138 "max_namespaces": 32, 00:27:17.138 "min_cntlid": 1, 00:27:17.138 "max_cntlid": 65519, 00:27:17.138 "namespaces": [ 00:27:17.138 { 00:27:17.138 "nsid": 1, 00:27:17.138 "bdev_name": "Malloc1", 00:27:17.138 "name": "Malloc1", 00:27:17.138 "nguid": "2EBBC0E823C440D4A2E69A9FAF9B9D0F", 00:27:17.138 "uuid": "2ebbc0e8-23c4-40d4-a2e6-9a9faf9b9d0f" 00:27:17.138 } 00:27:17.138 ] 00:27:17.138 }, 00:27:17.138 { 00:27:17.138 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:17.138 "subtype": "NVMe", 00:27:17.138 "listen_addresses": [ 00:27:17.138 { 00:27:17.138 "trtype": "VFIOUSER", 00:27:17.138 "adrfam": "IPv4", 00:27:17.138 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:17.138 "trsvcid": "0" 00:27:17.138 } 00:27:17.138 ], 00:27:17.138 "allow_any_host": true, 00:27:17.138 "hosts": [], 00:27:17.138 "serial_number": "SPDK2", 00:27:17.138 "model_number": "SPDK bdev Controller", 00:27:17.138 "max_namespaces": 32, 00:27:17.138 "min_cntlid": 1, 00:27:17.138 "max_cntlid": 65519, 00:27:17.138 "namespaces": [ 00:27:17.138 { 00:27:17.138 "nsid": 1, 00:27:17.138 "bdev_name": "Malloc2", 00:27:17.138 "name": "Malloc2", 00:27:17.138 "nguid": "0BCF3A9B28294462A3BADEC04459EB6E", 00:27:17.138 "uuid": "0bcf3a9b-2829-4462-a3ba-dec04459eb6e" 00:27:17.138 } 00:27:17.138 ] 00:27:17.138 } 00:27:17.138 ] 00:27:17.138 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2239337 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:27:17.399 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.399 [2024-06-10 11:35:46.272276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:27:17.399 Malloc3 00:27:17.399 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:27:17.659 [2024-06-10 11:35:46.546347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:27:17.659 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:17.659 Asynchronous Event Request test 00:27:17.659 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:27:17.659 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:27:17.659 Registering asynchronous event callbacks... 00:27:17.659 Starting namespace attribute notice tests for all controllers... 00:27:17.659 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:17.659 aer_cb - Changed Namespace 00:27:17.659 Cleaning up... 00:27:17.920 [ 00:27:17.920 { 00:27:17.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:17.920 "subtype": "Discovery", 00:27:17.920 "listen_addresses": [], 00:27:17.920 "allow_any_host": true, 00:27:17.920 "hosts": [] 00:27:17.920 }, 00:27:17.920 { 00:27:17.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:17.920 "subtype": "NVMe", 00:27:17.920 "listen_addresses": [ 00:27:17.920 { 00:27:17.920 "trtype": "VFIOUSER", 00:27:17.920 "adrfam": "IPv4", 00:27:17.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:17.920 "trsvcid": "0" 00:27:17.920 } 00:27:17.920 ], 00:27:17.920 "allow_any_host": true, 00:27:17.920 "hosts": [], 00:27:17.920 "serial_number": "SPDK1", 00:27:17.920 "model_number": "SPDK bdev Controller", 00:27:17.920 "max_namespaces": 32, 00:27:17.920 "min_cntlid": 1, 00:27:17.920 "max_cntlid": 65519, 00:27:17.920 "namespaces": [ 00:27:17.920 { 00:27:17.920 "nsid": 1, 00:27:17.920 "bdev_name": "Malloc1", 00:27:17.920 "name": "Malloc1", 00:27:17.920 "nguid": "2EBBC0E823C440D4A2E69A9FAF9B9D0F", 00:27:17.920 "uuid": "2ebbc0e8-23c4-40d4-a2e6-9a9faf9b9d0f" 00:27:17.920 }, 00:27:17.920 { 00:27:17.920 "nsid": 2, 00:27:17.920 "bdev_name": "Malloc3", 00:27:17.920 "name": "Malloc3", 00:27:17.920 "nguid": "C7E103723DDA42B887EEBCEBAD454775", 00:27:17.920 "uuid": "c7e10372-3dda-42b8-87ee-bcebad454775" 00:27:17.920 } 00:27:17.920 ] 00:27:17.920 }, 00:27:17.920 { 00:27:17.920 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:17.920 "subtype": "NVMe", 00:27:17.920 "listen_addresses": [ 00:27:17.920 { 00:27:17.920 "trtype": "VFIOUSER", 00:27:17.920 "adrfam": "IPv4", 00:27:17.920 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:17.920 "trsvcid": "0" 00:27:17.920 } 00:27:17.920 ], 00:27:17.920 "allow_any_host": true, 00:27:17.920 "hosts": [], 00:27:17.920 "serial_number": "SPDK2", 00:27:17.920 "model_number": "SPDK bdev Controller", 00:27:17.920 
"max_namespaces": 32, 00:27:17.920 "min_cntlid": 1, 00:27:17.920 "max_cntlid": 65519, 00:27:17.920 "namespaces": [ 00:27:17.920 { 00:27:17.920 "nsid": 1, 00:27:17.920 "bdev_name": "Malloc2", 00:27:17.920 "name": "Malloc2", 00:27:17.920 "nguid": "0BCF3A9B28294462A3BADEC04459EB6E", 00:27:17.920 "uuid": "0bcf3a9b-2829-4462-a3ba-dec04459eb6e" 00:27:17.920 } 00:27:17.920 ] 00:27:17.920 } 00:27:17.920 ] 00:27:17.920 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2239337 00:27:17.920 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:17.920 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:27:17.920 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:27:17.920 11:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:27:17.920 [2024-06-10 11:35:46.818288] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:27:17.920 [2024-06-10 11:35:46.818350] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239604 ] 00:27:17.920 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.920 [2024-06-10 11:35:46.854985] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:27:17.920 [2024-06-10 11:35:46.862889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:17.920 [2024-06-10 11:35:46.862910] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7feca3b58000 00:27:17.920 [2024-06-10 11:35:46.863896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.864904] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.865908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.866919] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.867920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.868929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.869931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.870945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:27:17.920 [2024-06-10 11:35:46.871956] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:27:17.920 [2024-06-10 11:35:46.871970] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7feca3b4d000 00:27:17.920 [2024-06-10 11:35:46.873294] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:27:17.920 [2024-06-10 11:35:46.890496] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:27:17.920 [2024-06-10 11:35:46.890520] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:27:18.182 [2024-06-10 11:35:46.895621] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:27:18.182 [2024-06-10 11:35:46.895665] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:27:18.182 [2024-06-10 11:35:46.895751] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:27:18.182 [2024-06-10 11:35:46.895767] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:27:18.182 [2024-06-10 11:35:46.895773] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:27:18.182 [2024-06-10 11:35:46.896615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:27:18.183 [2024-06-10 11:35:46.896624] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:27:18.183 [2024-06-10 11:35:46.896631] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:27:18.183 [2024-06-10 11:35:46.897626] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:27:18.183 [2024-06-10 11:35:46.897635] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:27:18.183 [2024-06-10 11:35:46.897643] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:27:18.183 [2024-06-10 11:35:46.898631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:27:18.183 [2024-06-10 11:35:46.898640] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:18.183 [2024-06-10 11:35:46.899638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:27:18.183 [2024-06-10 11:35:46.899646] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:27:18.183 [2024-06-10 11:35:46.899651] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:27:18.183 [2024-06-10 11:35:46.899657] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:18.183 [2024-06-10 11:35:46.899763] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:27:18.183 [2024-06-10 11:35:46.899768] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:18.183 [2024-06-10 11:35:46.899772] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:27:18.183 [2024-06-10 11:35:46.900644] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:27:18.183 [2024-06-10 11:35:46.901652] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:27:18.183 [2024-06-10 11:35:46.902665] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:27:18.183 [2024-06-10 11:35:46.903667] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:18.183 [2024-06-10 11:35:46.903711] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:18.183 [2024-06-10 11:35:46.904690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:27:18.183 [2024-06-10 11:35:46.904699] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:18.183 [2024-06-10 11:35:46.904704] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.904725] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:27:18.183 [2024-06-10 11:35:46.904733] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.904747] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:18.183 [2024-06-10 11:35:46.904752] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:18.183 [2024-06-10 11:35:46.904764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:18.183 [2024-06-10 11:35:46.913677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:27:18.183 [2024-06-10 11:35:46.913688] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:27:18.183 [2024-06-10 11:35:46.913692] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:27:18.183 [2024-06-10 11:35:46.913699] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:27:18.183 [2024-06-10 11:35:46.913704] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:27:18.183 [2024-06-10 11:35:46.913709] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:27:18.183 [2024-06-10 11:35:46.913713] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:27:18.183 [2024-06-10 11:35:46.913718] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.913725] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.913737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:27:18.183 [2024-06-10 11:35:46.921675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:27:18.183 [2024-06-10 11:35:46.921690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.183 [2024-06-10 11:35:46.921699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.183 [2024-06-10 11:35:46.921707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.183 [2024-06-10 11:35:46.921715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.183 [2024-06-10 11:35:46.921720] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.921727] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.921736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:27:18.183 [2024-06-10 11:35:46.929674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:27:18.183 [2024-06-10 11:35:46.929684] nvme_ctrlr.c:2945:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:27:18.183 [2024-06-10 11:35:46.929690] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.929696] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.929702] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.929710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:18.183 [2024-06-10 11:35:46.937676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:27:18.183 [2024-06-10 11:35:46.937728] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.937737] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.937746] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:27:18.183 [2024-06-10 11:35:46.937751] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:27:18.183 [2024-06-10 11:35:46.937757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:27:18.183 [2024-06-10 11:35:46.945675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:27:18.183 [2024-06-10 11:35:46.945686] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:27:18.183 [2024-06-10 11:35:46.945698] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.945706] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.945712] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:18.183 [2024-06-10 11:35:46.945717] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:18.183 [2024-06-10 11:35:46.945723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:18.183 [2024-06-10 11:35:46.953676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:27:18.183 [2024-06-10 11:35:46.953690] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.953698] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.953705] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:27:18.183 [2024-06-10 11:35:46.953710] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:18.183 [2024-06-10 11:35:46.953716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:18.183 [2024-06-10 11:35:46.961675] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:27:18.183 [2024-06-10 11:35:46.961685] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:18.183 [2024-06-10 11:35:46.961691] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:27:18.184 [2024-06-10 11:35:46.961701] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:27:18.184 [2024-06-10 11:35:46.961707] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:18.184 [2024-06-10 11:35:46.961712] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:27:18.184 [2024-06-10 11:35:46.961717] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:27:18.184 [2024-06-10 11:35:46.961721] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:27:18.184 [2024-06-10 11:35:46.961726] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:27:18.184 [2024-06-10 11:35:46.961742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:27:18.184 [2024-06-10 11:35:46.969675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:27:18.184 [2024-06-10 11:35:46.969689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:27:18.184 [2024-06-10 11:35:46.977676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:27:18.184 [2024-06-10 11:35:46.977689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:27:18.184 [2024-06-10 11:35:46.985676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:27:18.184 [2024-06-10 11:35:46.985688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:27:18.184 [2024-06-10 11:35:46.993674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:27:18.184 [2024-06-10 11:35:46.993690] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:27:18.184 [2024-06-10 11:35:46.993695] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:27:18.184 [2024-06-10 11:35:46.993698] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:27:18.184 [2024-06-10 11:35:46.993702] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:27:18.184 [2024-06-10 11:35:46.993708] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:27:18.184 [2024-06-10 11:35:46.993715] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:27:18.184 [2024-06-10 11:35:46.993720] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:27:18.184 [2024-06-10 11:35:46.993726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:27:18.184 [2024-06-10 11:35:46.993733] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:27:18.184 [2024-06-10 11:35:46.993737] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:27:18.184 [2024-06-10 11:35:46.993743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:27:18.184 [2024-06-10 11:35:46.993750] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:27:18.184 [2024-06-10 11:35:46.993755] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:27:18.184 [2024-06-10 11:35:46.993761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:27:18.184 [2024-06-10 11:35:47.001675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:27:18.184 [2024-06-10 11:35:47.001689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:27:18.184 [2024-06-10 11:35:47.001699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:27:18.184 [2024-06-10 11:35:47.001706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:27:18.184 ===================================================== 00:27:18.184 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:18.184 ===================================================== 00:27:18.184 Controller Capabilities/Features 00:27:18.184 ================================ 00:27:18.184 Vendor ID: 4e58 00:27:18.184 Subsystem Vendor ID: 4e58 00:27:18.184 Serial Number: SPDK2 00:27:18.184 Model Number: SPDK bdev Controller 00:27:18.184 Firmware Version: 24.09 00:27:18.184 Recommended Arb Burst: 6 00:27:18.184 IEEE OUI Identifier: 8d 6b 50 00:27:18.184 Multi-path I/O 00:27:18.184 May have multiple subsystem ports: Yes 00:27:18.184 May have multiple controllers: Yes 00:27:18.184 Associated with SR-IOV VF: No 00:27:18.184 Max Data Transfer Size: 131072 00:27:18.184 Max Number of Namespaces: 32 00:27:18.184 Max Number of I/O Queues: 127 00:27:18.184 NVMe Specification Version (VS): 1.3 00:27:18.184 NVMe Specification Version (Identify): 1.3 00:27:18.184 Maximum Queue Entries: 256 00:27:18.184 Contiguous Queues Required: Yes 00:27:18.184 Arbitration Mechanisms Supported 00:27:18.184 Weighted Round Robin: Not Supported 00:27:18.184 Vendor Specific: Not Supported 00:27:18.184 Reset Timeout: 15000 ms 00:27:18.184 Doorbell Stride: 4 bytes 
00:27:18.184 NVM Subsystem Reset: Not Supported 00:27:18.184 Command Sets Supported 00:27:18.184 NVM Command Set: Supported 00:27:18.184 Boot Partition: Not Supported 00:27:18.184 Memory Page Size Minimum: 4096 bytes 00:27:18.184 Memory Page Size Maximum: 4096 bytes 00:27:18.184 Persistent Memory Region: Not Supported 00:27:18.184 Optional Asynchronous Events Supported 00:27:18.184 Namespace Attribute Notices: Supported 00:27:18.184 Firmware Activation Notices: Not Supported 00:27:18.184 ANA Change Notices: Not Supported 00:27:18.184 PLE Aggregate Log Change Notices: Not Supported 00:27:18.184 LBA Status Info Alert Notices: Not Supported 00:27:18.184 EGE Aggregate Log Change Notices: Not Supported 00:27:18.184 Normal NVM Subsystem Shutdown event: Not Supported 00:27:18.184 Zone Descriptor Change Notices: Not Supported 00:27:18.184 Discovery Log Change Notices: Not Supported 00:27:18.184 Controller Attributes 00:27:18.184 128-bit Host Identifier: Supported 00:27:18.184 Non-Operational Permissive Mode: Not Supported 00:27:18.184 NVM Sets: Not Supported 00:27:18.184 Read Recovery Levels: Not Supported 00:27:18.184 Endurance Groups: Not Supported 00:27:18.184 Predictable Latency Mode: Not Supported 00:27:18.184 Traffic Based Keep ALive: Not Supported 00:27:18.184 Namespace Granularity: Not Supported 00:27:18.184 SQ Associations: Not Supported 00:27:18.184 UUID List: Not Supported 00:27:18.184 Multi-Domain Subsystem: Not Supported 00:27:18.184 Fixed Capacity Management: Not Supported 00:27:18.184 Variable Capacity Management: Not Supported 00:27:18.184 Delete Endurance Group: Not Supported 00:27:18.184 Delete NVM Set: Not Supported 00:27:18.184 Extended LBA Formats Supported: Not Supported 00:27:18.184 Flexible Data Placement Supported: Not Supported 00:27:18.184 00:27:18.184 Controller Memory Buffer Support 00:27:18.184 ================================ 00:27:18.184 Supported: No 00:27:18.184 00:27:18.184 Persistent Memory Region Support 00:27:18.184 ================================ 00:27:18.184 Supported: No 00:27:18.184 00:27:18.184 Admin Command Set Attributes 00:27:18.184 ============================ 00:27:18.184 Security Send/Receive: Not Supported 00:27:18.184 Format NVM: Not Supported 00:27:18.184 Firmware Activate/Download: Not Supported 00:27:18.184 Namespace Management: Not Supported 00:27:18.184 Device Self-Test: Not Supported 00:27:18.184 Directives: Not Supported 00:27:18.184 NVMe-MI: Not Supported 00:27:18.184 Virtualization Management: Not Supported 00:27:18.184 Doorbell Buffer Config: Not Supported 00:27:18.184 Get LBA Status Capability: Not Supported 00:27:18.184 Command & Feature Lockdown Capability: Not Supported 00:27:18.184 Abort Command Limit: 4 00:27:18.184 Async Event Request Limit: 4 00:27:18.184 Number of Firmware Slots: N/A 00:27:18.184 Firmware Slot 1 Read-Only: N/A 00:27:18.184 Firmware Activation Without Reset: N/A 00:27:18.184 Multiple Update Detection Support: N/A 00:27:18.184 Firmware Update Granularity: No Information Provided 00:27:18.184 Per-Namespace SMART Log: No 00:27:18.184 Asymmetric Namespace Access Log Page: Not Supported 00:27:18.184 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:27:18.184 Command Effects Log Page: Supported 00:27:18.184 Get Log Page Extended Data: Supported 00:27:18.184 Telemetry Log Pages: Not Supported 00:27:18.184 Persistent Event Log Pages: Not Supported 00:27:18.184 Supported Log Pages Log Page: May Support 00:27:18.184 Commands Supported & Effects Log Page: Not Supported 00:27:18.184 Feature Identifiers & Effects Log Page:May 
Support 00:27:18.184 NVMe-MI Commands & Effects Log Page: May Support 00:27:18.184 Data Area 4 for Telemetry Log: Not Supported 00:27:18.184 Error Log Page Entries Supported: 128 00:27:18.184 Keep Alive: Supported 00:27:18.184 Keep Alive Granularity: 10000 ms 00:27:18.184 00:27:18.184 NVM Command Set Attributes 00:27:18.184 ========================== 00:27:18.184 Submission Queue Entry Size 00:27:18.184 Max: 64 00:27:18.184 Min: 64 00:27:18.184 Completion Queue Entry Size 00:27:18.184 Max: 16 00:27:18.184 Min: 16 00:27:18.184 Number of Namespaces: 32 00:27:18.184 Compare Command: Supported 00:27:18.184 Write Uncorrectable Command: Not Supported 00:27:18.184 Dataset Management Command: Supported 00:27:18.184 Write Zeroes Command: Supported 00:27:18.184 Set Features Save Field: Not Supported 00:27:18.184 Reservations: Not Supported 00:27:18.184 Timestamp: Not Supported 00:27:18.185 Copy: Supported 00:27:18.185 Volatile Write Cache: Present 00:27:18.185 Atomic Write Unit (Normal): 1 00:27:18.185 Atomic Write Unit (PFail): 1 00:27:18.185 Atomic Compare & Write Unit: 1 00:27:18.185 Fused Compare & Write: Supported 00:27:18.185 Scatter-Gather List 00:27:18.185 SGL Command Set: Supported (Dword aligned) 00:27:18.185 SGL Keyed: Not Supported 00:27:18.185 SGL Bit Bucket Descriptor: Not Supported 00:27:18.185 SGL Metadata Pointer: Not Supported 00:27:18.185 Oversized SGL: Not Supported 00:27:18.185 SGL Metadata Address: Not Supported 00:27:18.185 SGL Offset: Not Supported 00:27:18.185 Transport SGL Data Block: Not Supported 00:27:18.185 Replay Protected Memory Block: Not Supported 00:27:18.185 00:27:18.185 Firmware Slot Information 00:27:18.185 ========================= 00:27:18.185 Active slot: 1 00:27:18.185 Slot 1 Firmware Revision: 24.09 00:27:18.185 00:27:18.185 00:27:18.185 Commands Supported and Effects 00:27:18.185 ============================== 00:27:18.185 Admin Commands 00:27:18.185 -------------- 00:27:18.185 Get Log Page (02h): Supported 00:27:18.185 Identify (06h): Supported 00:27:18.185 Abort (08h): Supported 00:27:18.185 Set Features (09h): Supported 00:27:18.185 Get Features (0Ah): Supported 00:27:18.185 Asynchronous Event Request (0Ch): Supported 00:27:18.185 Keep Alive (18h): Supported 00:27:18.185 I/O Commands 00:27:18.185 ------------ 00:27:18.185 Flush (00h): Supported LBA-Change 00:27:18.185 Write (01h): Supported LBA-Change 00:27:18.185 Read (02h): Supported 00:27:18.185 Compare (05h): Supported 00:27:18.185 Write Zeroes (08h): Supported LBA-Change 00:27:18.185 Dataset Management (09h): Supported LBA-Change 00:27:18.185 Copy (19h): Supported LBA-Change 00:27:18.185 Unknown (79h): Supported LBA-Change 00:27:18.185 Unknown (7Ah): Supported 00:27:18.185 00:27:18.185 Error Log 00:27:18.185 ========= 00:27:18.185 00:27:18.185 Arbitration 00:27:18.185 =========== 00:27:18.185 Arbitration Burst: 1 00:27:18.185 00:27:18.185 Power Management 00:27:18.185 ================ 00:27:18.185 Number of Power States: 1 00:27:18.185 Current Power State: Power State #0 00:27:18.185 Power State #0: 00:27:18.185 Max Power: 0.00 W 00:27:18.185 Non-Operational State: Operational 00:27:18.185 Entry Latency: Not Reported 00:27:18.185 Exit Latency: Not Reported 00:27:18.185 Relative Read Throughput: 0 00:27:18.185 Relative Read Latency: 0 00:27:18.185 Relative Write Throughput: 0 00:27:18.185 Relative Write Latency: 0 00:27:18.185 Idle Power: Not Reported 00:27:18.185 Active Power: Not Reported 00:27:18.185 Non-Operational Permissive Mode: Not Supported 00:27:18.185 00:27:18.185 Health Information 
00:27:18.185 ================== 00:27:18.185 Critical Warnings: 00:27:18.185 Available Spare Space: OK 00:27:18.185 Temperature: OK 00:27:18.185 Device Reliability: OK 00:27:18.185 Read Only: No 00:27:18.185 Volatile Memory Backup: OK 00:27:18.185 Current Temperature: 0 Kelvin (-2[2024-06-10 11:35:47.001805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:27:18.185 [2024-06-10 11:35:47.009675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:27:18.185 [2024-06-10 11:35:47.009706] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:27:18.185 [2024-06-10 11:35:47.009717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.185 [2024-06-10 11:35:47.009724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.185 [2024-06-10 11:35:47.009730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.185 [2024-06-10 11:35:47.009736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.185 [2024-06-10 11:35:47.009778] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:27:18.185 [2024-06-10 11:35:47.009788] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:27:18.185 [2024-06-10 11:35:47.010783] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:18.185 [2024-06-10 11:35:47.010833] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:27:18.185 [2024-06-10 11:35:47.010839] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:27:18.185 [2024-06-10 11:35:47.011786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:27:18.185 [2024-06-10 11:35:47.011797] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:27:18.185 [2024-06-10 11:35:47.011846] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:27:18.185 [2024-06-10 11:35:47.013225] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:27:18.185 73 Celsius) 00:27:18.185 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:18.185 Available Spare: 0% 00:27:18.185 Available Spare Threshold: 0% 00:27:18.185 Life Percentage Used: 0% 00:27:18.185 Data Units Read: 0 00:27:18.185 Data Units Written: 0 00:27:18.185 Host Read Commands: 0 00:27:18.185 Host Write Commands: 0 00:27:18.185 Controller Busy Time: 0 minutes 00:27:18.185 Power Cycles: 0 00:27:18.185 Power On Hours: 0 hours 00:27:18.185 Unsafe Shutdowns: 0 00:27:18.185 Unrecoverable Media Errors: 0 00:27:18.185 Lifetime Error Log Entries: 0 00:27:18.185 Warning Temperature Time: 0 
minutes 00:27:18.185 Critical Temperature Time: 0 minutes 00:27:18.185 00:27:18.185 Number of Queues 00:27:18.185 ================ 00:27:18.185 Number of I/O Submission Queues: 127 00:27:18.185 Number of I/O Completion Queues: 127 00:27:18.185 00:27:18.185 Active Namespaces 00:27:18.185 ================= 00:27:18.185 Namespace ID:1 00:27:18.185 Error Recovery Timeout: Unlimited 00:27:18.185 Command Set Identifier: NVM (00h) 00:27:18.185 Deallocate: Supported 00:27:18.185 Deallocated/Unwritten Error: Not Supported 00:27:18.185 Deallocated Read Value: Unknown 00:27:18.185 Deallocate in Write Zeroes: Not Supported 00:27:18.185 Deallocated Guard Field: 0xFFFF 00:27:18.185 Flush: Supported 00:27:18.185 Reservation: Supported 00:27:18.185 Namespace Sharing Capabilities: Multiple Controllers 00:27:18.185 Size (in LBAs): 131072 (0GiB) 00:27:18.185 Capacity (in LBAs): 131072 (0GiB) 00:27:18.185 Utilization (in LBAs): 131072 (0GiB) 00:27:18.185 NGUID: 0BCF3A9B28294462A3BADEC04459EB6E 00:27:18.185 UUID: 0bcf3a9b-2829-4462-a3ba-dec04459eb6e 00:27:18.185 Thin Provisioning: Not Supported 00:27:18.185 Per-NS Atomic Units: Yes 00:27:18.185 Atomic Boundary Size (Normal): 0 00:27:18.185 Atomic Boundary Size (PFail): 0 00:27:18.185 Atomic Boundary Offset: 0 00:27:18.185 Maximum Single Source Range Length: 65535 00:27:18.185 Maximum Copy Length: 65535 00:27:18.185 Maximum Source Range Count: 1 00:27:18.185 NGUID/EUI64 Never Reused: No 00:27:18.185 Namespace Write Protected: No 00:27:18.185 Number of LBA Formats: 1 00:27:18.185 Current LBA Format: LBA Format #00 00:27:18.185 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:18.185 00:27:18.185 11:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:27:18.185 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.445 [2024-06-10 11:35:47.204089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:23.732 Initializing NVMe Controllers 00:27:23.732 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:23.732 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:27:23.732 Initialization complete. Launching workers. 
00:27:23.732 ======================================================== 00:27:23.732 Latency(us) 00:27:23.732 Device Information : IOPS MiB/s Average min max 00:27:23.732 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44162.50 172.51 2898.23 936.12 6150.94 00:27:23.732 ======================================================== 00:27:23.732 Total : 44162.50 172.51 2898.23 936.12 6150.94 00:27:23.732 00:27:23.732 [2024-06-10 11:35:52.308888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:23.732 11:35:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:27:23.732 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.732 [2024-06-10 11:35:52.508519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:29.018 Initializing NVMe Controllers 00:27:29.018 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:29.018 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:27:29.018 Initialization complete. Launching workers. 00:27:29.018 ======================================================== 00:27:29.018 Latency(us) 00:27:29.018 Device Information : IOPS MiB/s Average min max 00:27:29.019 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34484.20 134.70 3711.17 1198.32 8987.18 00:27:29.019 ======================================================== 00:27:29.019 Total : 34484.20 134.70 3711.17 1198.32 8987.18 00:27:29.019 00:27:29.019 [2024-06-10 11:35:57.528954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:29.019 11:35:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:27:29.019 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.019 [2024-06-10 11:35:57.748494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:34.305 [2024-06-10 11:36:02.892772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:34.305 Initializing NVMe Controllers 00:27:34.305 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:34.305 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:27:34.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:27:34.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:27:34.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:27:34.305 Initialization complete. Launching workers. 
00:27:34.305 Starting thread on core 2 00:27:34.305 Starting thread on core 3 00:27:34.305 Starting thread on core 1 00:27:34.305 11:36:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:27:34.305 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.305 [2024-06-10 11:36:03.164252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:37.607 [2024-06-10 11:36:06.209886] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:37.607 Initializing NVMe Controllers 00:27:37.607 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:37.607 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:37.607 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:27:37.607 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:27:37.607 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:27:37.607 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:27:37.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:27:37.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:27:37.607 Initialization complete. Launching workers. 00:27:37.607 Starting thread on core 1 with urgent priority queue 00:27:37.607 Starting thread on core 2 with urgent priority queue 00:27:37.607 Starting thread on core 3 with urgent priority queue 00:27:37.607 Starting thread on core 0 with urgent priority queue 00:27:37.607 SPDK bdev Controller (SPDK2 ) core 0: 11463.00 IO/s 8.72 secs/100000 ios 00:27:37.607 SPDK bdev Controller (SPDK2 ) core 1: 13188.67 IO/s 7.58 secs/100000 ios 00:27:37.607 SPDK bdev Controller (SPDK2 ) core 2: 13134.67 IO/s 7.61 secs/100000 ios 00:27:37.607 SPDK bdev Controller (SPDK2 ) core 3: 10846.33 IO/s 9.22 secs/100000 ios 00:27:37.607 ======================================================== 00:27:37.607 00:27:37.607 11:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:27:37.608 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.608 [2024-06-10 11:36:06.469648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:37.608 Initializing NVMe Controllers 00:27:37.608 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:37.608 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:37.608 Namespace ID: 1 size: 0GB 00:27:37.608 Initialization complete. 00:27:37.608 INFO: using host memory buffer for IO 00:27:37.608 Hello world! 
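[editor's note, not part of the captured output] The spdk_nvme_perf and arbitration summaries above are internally consistent: bandwidth is IOPS times the 4096-byte I/O size, average latency is roughly queue depth divided by IOPS (Little's law), and the arbitration "secs/100000 ios" column is simply 100000 divided by the per-core IO/s. A minimal sanity-check sketch, assuming only standard awk and the -q 128 -o 4096 parameters shown in the commands above:
awk 'BEGIN {
  # read run: 44162.50 IOPS at queue depth 128, 4096-byte I/O
  iops = 44162.50; qd = 128; io = 4096;
  printf "read:  %.2f MiB/s, %.0f us avg\n", iops * io / 1048576, qd / iops * 1e6;
  # write run: 34484.20 IOPS, same queue depth and I/O size
  iops = 34484.20;
  printf "write: %.2f MiB/s, %.0f us avg\n", iops * io / 1048576, qd / iops * 1e6;
  # arbitration line for core 0: 11463.00 IO/s over 100000 ios
  printf "arbitration core 0: %.2f secs/100000 ios\n", 100000 / 11463.00;
}'
This should print roughly 172.51 MiB/s / 2898 us, 134.70 MiB/s / 3712 us, and 8.72 secs, matching the Device Information rows and the arbitration table logged above (the latency figure is approximate, since the reported average also includes software overhead).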
00:27:37.608 [2024-06-10 11:36:06.476678] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:37.608 11:36:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:27:37.869 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.869 [2024-06-10 11:36:06.732934] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:39.255 Initializing NVMe Controllers 00:27:39.255 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:39.255 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:39.255 Initialization complete. Launching workers. 00:27:39.255 submit (in ns) avg, min, max = 7748.9, 3891.7, 4000935.0 00:27:39.255 complete (in ns) avg, min, max = 25353.8, 2387.5, 7989166.7 00:27:39.255 00:27:39.255 Submit histogram 00:27:39.255 ================ 00:27:39.255 Range in us Cumulative Count 00:27:39.255 3.867 - 3.893: 0.0066% ( 1) 00:27:39.255 3.893 - 3.920: 0.6578% ( 99) 00:27:39.255 3.920 - 3.947: 4.4070% ( 570) 00:27:39.255 3.947 - 3.973: 13.5631% ( 1392) 00:27:39.255 3.973 - 4.000: 24.5083% ( 1664) 00:27:39.255 4.000 - 4.027: 36.0850% ( 1760) 00:27:39.255 4.027 - 4.053: 47.5104% ( 1737) 00:27:39.255 4.053 - 4.080: 60.1789% ( 1926) 00:27:39.255 4.080 - 4.107: 74.3537% ( 2155) 00:27:39.255 4.107 - 4.133: 87.3841% ( 1981) 00:27:39.255 4.133 - 4.160: 94.9878% ( 1156) 00:27:39.255 4.160 - 4.187: 98.2175% ( 491) 00:27:39.255 4.187 - 4.213: 99.2370% ( 155) 00:27:39.255 4.213 - 4.240: 99.3883% ( 23) 00:27:39.255 4.240 - 4.267: 99.4343% ( 7) 00:27:39.255 4.267 - 4.293: 99.4541% ( 3) 00:27:39.255 4.293 - 4.320: 99.4672% ( 2) 00:27:39.255 4.667 - 4.693: 99.4738% ( 1) 00:27:39.255 4.960 - 4.987: 99.4804% ( 1) 00:27:39.255 4.987 - 5.013: 99.4869% ( 1) 00:27:39.255 5.120 - 5.147: 99.4935% ( 1) 00:27:39.255 5.200 - 5.227: 99.5001% ( 1) 00:27:39.255 5.227 - 5.253: 99.5067% ( 1) 00:27:39.255 5.307 - 5.333: 99.5133% ( 1) 00:27:39.255 5.493 - 5.520: 99.5198% ( 1) 00:27:39.255 5.627 - 5.653: 99.5264% ( 1) 00:27:39.255 5.760 - 5.787: 99.5330% ( 1) 00:27:39.255 5.787 - 5.813: 99.5396% ( 1) 00:27:39.255 6.080 - 6.107: 99.5461% ( 1) 00:27:39.255 6.213 - 6.240: 99.5527% ( 1) 00:27:39.255 6.400 - 6.427: 99.5593% ( 1) 00:27:39.255 6.533 - 6.560: 99.5659% ( 1) 00:27:39.255 6.613 - 6.640: 99.5725% ( 1) 00:27:39.255 6.773 - 6.800: 99.5790% ( 1) 00:27:39.255 6.880 - 6.933: 99.5922% ( 2) 00:27:39.255 6.933 - 6.987: 99.6185% ( 4) 00:27:39.255 6.987 - 7.040: 99.6317% ( 2) 00:27:39.255 7.093 - 7.147: 99.6382% ( 1) 00:27:39.255 7.147 - 7.200: 99.6448% ( 1) 00:27:39.255 7.200 - 7.253: 99.6580% ( 2) 00:27:39.255 7.253 - 7.307: 99.6645% ( 1) 00:27:39.255 7.360 - 7.413: 99.6843% ( 3) 00:27:39.255 7.413 - 7.467: 99.6909% ( 1) 00:27:39.255 7.520 - 7.573: 99.7040% ( 2) 00:27:39.255 7.573 - 7.627: 99.7172% ( 2) 00:27:39.255 7.627 - 7.680: 99.7435% ( 4) 00:27:39.255 7.680 - 7.733: 99.7500% ( 1) 00:27:39.255 7.787 - 7.840: 99.7698% ( 3) 00:27:39.255 7.840 - 7.893: 99.7764% ( 1) 00:27:39.255 7.893 - 7.947: 99.7829% ( 1) 00:27:39.255 7.947 - 8.000: 99.7895% ( 1) 00:27:39.255 8.000 - 8.053: 99.7961% ( 1) 00:27:39.255 8.160 - 8.213: 99.8158% ( 3) 00:27:39.255 8.320 - 8.373: 99.8224% ( 1) 00:27:39.255 8.480 - 8.533: 99.8290% ( 1) 00:27:39.255 8.587 - 8.640: 99.8356% ( 1) 00:27:39.255 8.640 - 8.693: 99.8421% ( 1) 
00:27:39.255 8.693 - 8.747: 99.8487% ( 1) 00:27:39.255 8.747 - 8.800: 99.8553% ( 1) 00:27:39.255 9.067 - 9.120: 99.8619% ( 1) 00:27:39.255 9.333 - 9.387: 99.8684% ( 1) 00:27:39.255 9.760 - 9.813: 99.8750% ( 1) 00:27:39.255 10.133 - 10.187: 99.8816% ( 1) 00:27:39.255 10.293 - 10.347: 99.8882% ( 1) 00:27:39.255 12.480 - 12.533: 99.8948% ( 1) 00:27:39.255 13.653 - 13.760: 99.9013% ( 1) 00:27:39.255 15.680 - 15.787: 99.9079% ( 1) 00:27:39.255 3986.773 - 4014.080: 100.0000% ( 14) 00:27:39.255 00:27:39.255 Complete histogram 00:27:39.255 ================== 00:27:39.255 Range in us Cumulative Count 00:27:39.256 2.387 - 2.400: 1.0919% ( 166) 00:27:39.256 2.400 - 2.413: 1.3418% ( 38) 00:27:39.256 2.413 - 2.427: 1.9601% ( 94) 00:27:39.256 2.427 - 2.440: 34.1643% ( 4896) 00:27:39.256 2.440 - 2.453: 39.8474% ( 864) 00:27:39.256 2.453 - 2.467: 68.7627% ( 4396) 00:27:39.256 2.467 - 2.480: 77.6097% ( 1345) 00:27:39.256 2.480 - 2.493: 80.7275% ( 474) 00:27:39.256 2.493 - 2.507: 83.0889% ( 359) 00:27:39.256 2.507 - [2024-06-10 11:36:07.840384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:39.256 2.520: 87.5354% ( 676) 00:27:39.256 2.520 - 2.533: 91.9753% ( 675) 00:27:39.256 2.533 - 2.547: 95.4680% ( 531) 00:27:39.256 2.547 - 2.560: 97.9807% ( 382) 00:27:39.256 2.560 - 2.573: 98.7831% ( 122) 00:27:39.256 2.573 - 2.587: 99.0265% ( 37) 00:27:39.256 2.587 - 2.600: 99.0923% ( 10) 00:27:39.256 2.600 - 2.613: 99.1054% ( 2) 00:27:39.256 2.613 - 2.627: 99.1120% ( 1) 00:27:39.256 4.747 - 4.773: 99.1186% ( 1) 00:27:39.256 4.827 - 4.853: 99.1252% ( 1) 00:27:39.256 4.853 - 4.880: 99.1318% ( 1) 00:27:39.256 4.933 - 4.960: 99.1383% ( 1) 00:27:39.256 4.987 - 5.013: 99.1449% ( 1) 00:27:39.256 5.067 - 5.093: 99.1515% ( 1) 00:27:39.256 5.147 - 5.173: 99.1581% ( 1) 00:27:39.256 5.173 - 5.200: 99.1712% ( 2) 00:27:39.256 5.200 - 5.227: 99.1844% ( 2) 00:27:39.256 5.253 - 5.280: 99.1909% ( 1) 00:27:39.256 5.280 - 5.307: 99.1975% ( 1) 00:27:39.256 5.520 - 5.547: 99.2041% ( 1) 00:27:39.256 5.573 - 5.600: 99.2107% ( 1) 00:27:39.256 5.653 - 5.680: 99.2173% ( 1) 00:27:39.256 5.733 - 5.760: 99.2238% ( 1) 00:27:39.256 5.840 - 5.867: 99.2304% ( 1) 00:27:39.256 5.920 - 5.947: 99.2370% ( 1) 00:27:39.256 6.027 - 6.053: 99.2501% ( 2) 00:27:39.256 6.107 - 6.133: 99.2567% ( 1) 00:27:39.256 6.133 - 6.160: 99.2699% ( 2) 00:27:39.256 6.293 - 6.320: 99.2765% ( 1) 00:27:39.256 6.320 - 6.347: 99.2830% ( 1) 00:27:39.256 6.373 - 6.400: 99.2896% ( 1) 00:27:39.256 6.480 - 6.507: 99.3028% ( 2) 00:27:39.256 6.613 - 6.640: 99.3093% ( 1) 00:27:39.256 6.640 - 6.667: 99.3159% ( 1) 00:27:39.256 6.693 - 6.720: 99.3291% ( 2) 00:27:39.256 6.773 - 6.800: 99.3357% ( 1) 00:27:39.256 6.880 - 6.933: 99.3422% ( 1) 00:27:39.256 7.147 - 7.200: 99.3488% ( 1) 00:27:39.256 7.253 - 7.307: 99.3620% ( 2) 00:27:39.256 7.360 - 7.413: 99.3685% ( 1) 00:27:39.256 7.467 - 7.520: 99.3751% ( 1) 00:27:39.256 7.520 - 7.573: 99.3817% ( 1) 00:27:39.256 7.840 - 7.893: 99.3949% ( 2) 00:27:39.256 7.893 - 7.947: 99.4014% ( 1) 00:27:39.256 9.067 - 9.120: 99.4080% ( 1) 00:27:39.256 12.053 - 12.107: 99.4146% ( 1) 00:27:39.256 20.053 - 20.160: 99.4212% ( 1) 00:27:39.256 40.320 - 40.533: 99.4277% ( 1) 00:27:39.256 173.227 - 174.080: 99.4343% ( 1) 00:27:39.256 3986.773 - 4014.080: 99.9934% ( 85) 00:27:39.256 7973.547 - 8028.160: 100.0000% ( 1) 00:27:39.256 00:27:39.256 11:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:27:39.256 
11:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:27:39.256 11:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:27:39.256 11:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:27:39.256 11:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:39.256 [ 00:27:39.256 { 00:27:39.256 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:39.256 "subtype": "Discovery", 00:27:39.256 "listen_addresses": [], 00:27:39.256 "allow_any_host": true, 00:27:39.256 "hosts": [] 00:27:39.256 }, 00:27:39.256 { 00:27:39.256 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:39.256 "subtype": "NVMe", 00:27:39.256 "listen_addresses": [ 00:27:39.256 { 00:27:39.256 "trtype": "VFIOUSER", 00:27:39.256 "adrfam": "IPv4", 00:27:39.256 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:39.256 "trsvcid": "0" 00:27:39.256 } 00:27:39.256 ], 00:27:39.256 "allow_any_host": true, 00:27:39.256 "hosts": [], 00:27:39.256 "serial_number": "SPDK1", 00:27:39.256 "model_number": "SPDK bdev Controller", 00:27:39.256 "max_namespaces": 32, 00:27:39.256 "min_cntlid": 1, 00:27:39.256 "max_cntlid": 65519, 00:27:39.256 "namespaces": [ 00:27:39.256 { 00:27:39.256 "nsid": 1, 00:27:39.256 "bdev_name": "Malloc1", 00:27:39.256 "name": "Malloc1", 00:27:39.256 "nguid": "2EBBC0E823C440D4A2E69A9FAF9B9D0F", 00:27:39.256 "uuid": "2ebbc0e8-23c4-40d4-a2e6-9a9faf9b9d0f" 00:27:39.256 }, 00:27:39.256 { 00:27:39.256 "nsid": 2, 00:27:39.256 "bdev_name": "Malloc3", 00:27:39.256 "name": "Malloc3", 00:27:39.256 "nguid": "C7E103723DDA42B887EEBCEBAD454775", 00:27:39.256 "uuid": "c7e10372-3dda-42b8-87ee-bcebad454775" 00:27:39.256 } 00:27:39.256 ] 00:27:39.256 }, 00:27:39.256 { 00:27:39.256 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:39.256 "subtype": "NVMe", 00:27:39.256 "listen_addresses": [ 00:27:39.256 { 00:27:39.256 "trtype": "VFIOUSER", 00:27:39.256 "adrfam": "IPv4", 00:27:39.256 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:39.256 "trsvcid": "0" 00:27:39.256 } 00:27:39.256 ], 00:27:39.256 "allow_any_host": true, 00:27:39.256 "hosts": [], 00:27:39.256 "serial_number": "SPDK2", 00:27:39.256 "model_number": "SPDK bdev Controller", 00:27:39.256 "max_namespaces": 32, 00:27:39.256 "min_cntlid": 1, 00:27:39.256 "max_cntlid": 65519, 00:27:39.256 "namespaces": [ 00:27:39.256 { 00:27:39.256 "nsid": 1, 00:27:39.256 "bdev_name": "Malloc2", 00:27:39.256 "name": "Malloc2", 00:27:39.256 "nguid": "0BCF3A9B28294462A3BADEC04459EB6E", 00:27:39.256 "uuid": "0bcf3a9b-2829-4462-a3ba-dec04459eb6e" 00:27:39.256 } 00:27:39.256 ] 00:27:39.256 } 00:27:39.256 ] 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2243761 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:27:39.256 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:27:39.256 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.517 [2024-06-10 11:36:08.269645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:27:39.517 Malloc4 00:27:39.517 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:27:39.517 [2024-06-10 11:36:08.487037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:27:39.777 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:27:39.777 Asynchronous Event Request test 00:27:39.777 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:27:39.777 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:27:39.777 Registering asynchronous event callbacks... 00:27:39.777 Starting namespace attribute notice tests for all controllers... 00:27:39.777 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:39.777 aer_cb - Changed Namespace 00:27:39.777 Cleaning up... 
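For reference, the namespace hot-add that the AER test reacts to is just the two RPCs traced above, condensed here into one place; the sizes and names are taken from the log, and the rpc.py path is the one used in this workspace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 --name Malloc4                          # 64 MB malloc bdev with 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2     # attach it to cnode2 as NSID 2
# The already-connected aer example then receives the namespace-attribute-changed
# AEN (log page 4, as logged above), and the nvmf_get_subsystems output that follows
# lists Malloc4 as namespace 2 of cnode2.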
00:27:39.777 [ 00:27:39.777 { 00:27:39.777 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:39.778 "subtype": "Discovery", 00:27:39.778 "listen_addresses": [], 00:27:39.778 "allow_any_host": true, 00:27:39.778 "hosts": [] 00:27:39.778 }, 00:27:39.778 { 00:27:39.778 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:27:39.778 "subtype": "NVMe", 00:27:39.778 "listen_addresses": [ 00:27:39.778 { 00:27:39.778 "trtype": "VFIOUSER", 00:27:39.778 "adrfam": "IPv4", 00:27:39.778 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:27:39.778 "trsvcid": "0" 00:27:39.778 } 00:27:39.778 ], 00:27:39.778 "allow_any_host": true, 00:27:39.778 "hosts": [], 00:27:39.778 "serial_number": "SPDK1", 00:27:39.778 "model_number": "SPDK bdev Controller", 00:27:39.778 "max_namespaces": 32, 00:27:39.778 "min_cntlid": 1, 00:27:39.778 "max_cntlid": 65519, 00:27:39.778 "namespaces": [ 00:27:39.778 { 00:27:39.778 "nsid": 1, 00:27:39.778 "bdev_name": "Malloc1", 00:27:39.778 "name": "Malloc1", 00:27:39.778 "nguid": "2EBBC0E823C440D4A2E69A9FAF9B9D0F", 00:27:39.778 "uuid": "2ebbc0e8-23c4-40d4-a2e6-9a9faf9b9d0f" 00:27:39.778 }, 00:27:39.778 { 00:27:39.778 "nsid": 2, 00:27:39.778 "bdev_name": "Malloc3", 00:27:39.778 "name": "Malloc3", 00:27:39.778 "nguid": "C7E103723DDA42B887EEBCEBAD454775", 00:27:39.778 "uuid": "c7e10372-3dda-42b8-87ee-bcebad454775" 00:27:39.778 } 00:27:39.778 ] 00:27:39.778 }, 00:27:39.778 { 00:27:39.778 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:27:39.778 "subtype": "NVMe", 00:27:39.778 "listen_addresses": [ 00:27:39.778 { 00:27:39.778 "trtype": "VFIOUSER", 00:27:39.778 "adrfam": "IPv4", 00:27:39.778 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:27:39.778 "trsvcid": "0" 00:27:39.778 } 00:27:39.778 ], 00:27:39.778 "allow_any_host": true, 00:27:39.778 "hosts": [], 00:27:39.778 "serial_number": "SPDK2", 00:27:39.778 "model_number": "SPDK bdev Controller", 00:27:39.778 "max_namespaces": 32, 00:27:39.778 "min_cntlid": 1, 00:27:39.778 "max_cntlid": 65519, 00:27:39.778 "namespaces": [ 00:27:39.778 { 00:27:39.778 "nsid": 1, 00:27:39.778 "bdev_name": "Malloc2", 00:27:39.778 "name": "Malloc2", 00:27:39.778 "nguid": "0BCF3A9B28294462A3BADEC04459EB6E", 00:27:39.778 "uuid": "0bcf3a9b-2829-4462-a3ba-dec04459eb6e" 00:27:39.778 }, 00:27:39.778 { 00:27:39.778 "nsid": 2, 00:27:39.778 "bdev_name": "Malloc4", 00:27:39.778 "name": "Malloc4", 00:27:39.778 "nguid": "BF55EDA9F0D7400088157066BF85DEF3", 00:27:39.778 "uuid": "bf55eda9-f0d7-4000-8815-7066bf85def3" 00:27:39.778 } 00:27:39.778 ] 00:27:39.778 } 00:27:39.778 ] 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2243761 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2234553 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2234553 ']' 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2234553 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2234553 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo 
']' 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2234553' 00:27:39.778 killing process with pid 2234553 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2234553 00:27:39.778 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2234553 00:27:40.056 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:27:40.056 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:27:40.056 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:27:40.056 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:27:40.056 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:27:40.056 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2244204 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2244204' 00:27:40.057 Process pid: 2244204 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2244204 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2244204 ']' 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:40.057 11:36:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:27:40.057 [2024-06-10 11:36:08.953189] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:40.057 [2024-06-10 11:36:08.954163] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:27:40.057 [2024-06-10 11:36:08.954205] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.057 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.057 [2024-06-10 11:36:09.014641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.378 [2024-06-10 11:36:09.080507] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.378 [2024-06-10 11:36:09.080543] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:40.378 [2024-06-10 11:36:09.080554] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.378 [2024-06-10 11:36:09.080561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.378 [2024-06-10 11:36:09.080567] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.378 [2024-06-10 11:36:09.080703] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.378 [2024-06-10 11:36:09.080788] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.378 [2024-06-10 11:36:09.080935] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.378 [2024-06-10 11:36:09.080936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.378 [2024-06-10 11:36:09.147829] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:40.378 [2024-06-10 11:36:09.147915] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:40.378 [2024-06-10 11:36:09.148314] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:40.378 [2024-06-10 11:36:09.148957] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:40.378 [2024-06-10 11:36:09.148987] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:40.378 11:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:40.378 11:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:27:40.378 11:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:27:41.320 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:27:41.581 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:27:41.581 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:27:41.581 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:27:41.581 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:27:41.581 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:41.842 Malloc1 00:27:41.842 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:27:41.842 11:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:27:42.103 11:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:27:42.362 11:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:27:42.362 11:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:27:42.362 11:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:27:42.623 Malloc2 00:27:42.623 11:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:27:42.885 11:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:27:43.144 11:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:27:43.144 11:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:27:43.144 11:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2244204 00:27:43.144 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2244204 ']' 00:27:43.144 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2244204 00:27:43.145 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:27:43.145 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2244204 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2244204' 00:27:43.405 killing process with pid 2244204 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2244204 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2244204 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:27:43.405 00:27:43.405 real 0m50.640s 00:27:43.405 user 3m21.007s 00:27:43.405 sys 0m3.018s 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:43.405 11:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:27:43.405 ************************************ 00:27:43.405 END TEST nvmf_vfio_user 00:27:43.405 ************************************ 00:27:43.405 11:36:12 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:27:43.405 11:36:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:43.405 11:36:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:43.405 11:36:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.666 ************************************ 00:27:43.666 START TEST nvmf_vfio_user_nvme_compliance 00:27:43.666 
************************************ 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:27:43.666 * Looking for test storage... 00:27:43.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2245287 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2245287' 00:27:43.666 Process pid: 2245287 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2245287 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 2245287 ']' 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:43.666 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:43.666 [2024-06-10 11:36:12.588317] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:27:43.666 [2024-06-10 11:36:12.588380] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.666 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.926 [2024-06-10 11:36:12.655299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:43.926 [2024-06-10 11:36:12.728567] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.926 [2024-06-10 11:36:12.728609] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.927 [2024-06-10 11:36:12.728617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.927 [2024-06-10 11:36:12.728624] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.927 [2024-06-10 11:36:12.728629] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.927 [2024-06-10 11:36:12.728687] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.927 [2024-06-10 11:36:12.728772] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.927 [2024-06-10 11:36:12.728776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.927 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:43.927 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:27:43.927 11:36:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.869 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:45.130 malloc0 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:45.130 11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.130 
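The target that nvme_compliance connects to next is built entirely from the rpc_cmd calls traced above; a condensed sketch of that sequence follows. rpc_cmd in the test script wraps scripts/rpc.py against the nvmf_tgt started earlier, and the socket directory and NQN below are the ones used in this run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER                                    # register the vfio-user transport
mkdir -p /var/run/vfio-user                                               # socket directory the listener will use
$RPC bdev_malloc_create 64 512 -b malloc0                                 # 64 MB backing bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32    # any host allowed, serial "spdk", up to 32 namespaces
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# nvme_compliance (and, against its own target later, nvme_fuzz) then connects with
#   -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'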
11:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:27:45.130 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.130 00:27:45.130 00:27:45.130 CUnit - A unit testing framework for C - Version 2.1-3 00:27:45.130 http://cunit.sourceforge.net/ 00:27:45.130 00:27:45.130 00:27:45.130 Suite: nvme_compliance 00:27:45.130 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 11:36:14.070182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.130 [2024-06-10 11:36:14.071544] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:27:45.130 [2024-06-10 11:36:14.071559] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:27:45.130 [2024-06-10 11:36:14.071565] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:27:45.130 [2024-06-10 11:36:14.073198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.390 passed 00:27:45.390 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 11:36:14.166773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.390 [2024-06-10 11:36:14.169794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.390 passed 00:27:45.390 Test: admin_identify_ns ...[2024-06-10 11:36:14.265925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.391 [2024-06-10 11:36:14.325683] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:27:45.391 [2024-06-10 11:36:14.333677] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:27:45.391 [2024-06-10 11:36:14.354786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.651 passed 00:27:45.651 Test: admin_get_features_mandatory_features ...[2024-06-10 11:36:14.448805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.651 [2024-06-10 11:36:14.451822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.651 passed 00:27:45.651 Test: admin_get_features_optional_features ...[2024-06-10 11:36:14.545374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.651 [2024-06-10 11:36:14.549396] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.651 passed 00:27:45.911 Test: admin_set_features_number_of_queues ...[2024-06-10 11:36:14.641507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.911 [2024-06-10 11:36:14.745786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:45.911 passed 00:27:45.911 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 11:36:14.839413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:45.911 [2024-06-10 11:36:14.842430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.172 passed 00:27:46.172 Test: admin_get_log_page_with_lpo ...[2024-06-10 11:36:14.935539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.173 [2024-06-10 11:36:15.002680] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:27:46.173 [2024-06-10 11:36:15.015748] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.173 passed 00:27:46.173 Test: fabric_property_get ...[2024-06-10 11:36:15.109833] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.173 [2024-06-10 11:36:15.111096] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:27:46.173 [2024-06-10 11:36:15.112864] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.433 passed 00:27:46.433 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 11:36:15.206421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.433 [2024-06-10 11:36:15.207657] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:27:46.433 [2024-06-10 11:36:15.209447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.433 passed 00:27:46.433 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 11:36:15.302571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.434 [2024-06-10 11:36:15.385683] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:46.434 [2024-06-10 11:36:15.401676] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:46.694 [2024-06-10 11:36:15.406768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.694 passed 00:27:46.694 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 11:36:15.499370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.694 [2024-06-10 11:36:15.500596] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:27:46.694 [2024-06-10 11:36:15.502386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.694 passed 00:27:46.694 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 11:36:15.595504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.955 [2024-06-10 11:36:15.669678] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:27:46.955 [2024-06-10 11:36:15.693675] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:27:46.955 [2024-06-10 11:36:15.698766] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.955 passed 00:27:46.955 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 11:36:15.793826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:46.955 [2024-06-10 11:36:15.795059] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:27:46.955 [2024-06-10 11:36:15.795083] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:27:46.955 [2024-06-10 11:36:15.796848] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:46.955 passed 00:27:46.955 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 11:36:15.889937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:47.216 [2024-06-10 11:36:15.981677] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:27:47.216 [2024-06-10 11:36:15.989675] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:27:47.216 [2024-06-10 11:36:15.997676] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:27:47.216 [2024-06-10 11:36:16.005683] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:27:47.216 [2024-06-10 11:36:16.034763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:47.216 passed 00:27:47.216 Test: admin_create_io_sq_verify_pc ...[2024-06-10 11:36:16.128365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:47.216 [2024-06-10 11:36:16.143683] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:27:47.216 [2024-06-10 11:36:16.161506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:47.477 passed 00:27:47.477 Test: admin_create_io_qp_max_qps ...[2024-06-10 11:36:16.255077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:48.419 [2024-06-10 11:36:17.348680] nvme_ctrlr.c:5384:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:27:48.991 [2024-06-10 11:36:17.742898] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:48.991 passed 00:27:48.991 Test: admin_create_io_sq_shared_cq ...[2024-06-10 11:36:17.835128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:27:49.252 [2024-06-10 11:36:17.966680] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:27:49.252 [2024-06-10 11:36:18.003735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:27:49.252 passed 00:27:49.252 00:27:49.252 Run Summary: Type Total Ran Passed Failed Inactive 00:27:49.252 suites 1 1 n/a 0 0 00:27:49.252 tests 18 18 18 0 0 00:27:49.252 asserts 360 360 360 0 n/a 00:27:49.252 00:27:49.252 Elapsed time = 1.648 seconds 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2245287 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 2245287 ']' 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 2245287 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2245287 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2245287' 00:27:49.252 killing process with pid 2245287 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 2245287 00:27:49.252 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 2245287 00:27:49.514 11:36:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:49.514 00:27:49.514 real 0m5.858s 00:27:49.514 user 0m16.584s 00:27:49.514 sys 0m0.466s 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:27:49.514 ************************************ 00:27:49.514 END TEST nvmf_vfio_user_nvme_compliance 00:27:49.514 ************************************ 00:27:49.514 11:36:18 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:27:49.514 11:36:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:49.514 11:36:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:49.514 11:36:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.514 ************************************ 00:27:49.514 START TEST nvmf_vfio_user_fuzz 00:27:49.514 ************************************ 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:27:49.514 * Looking for test storage... 00:27:49.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:27:49.514 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2246382 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2246382' 00:27:49.515 Process pid: 2246382 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2246382 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 2246382 ']' 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:49.515 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:49.776 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:49.776 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:27:49.776 11:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:51.162 malloc0 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:27:51.162 11:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:28:23.273 Fuzzing completed. 
Shutting down the fuzz application 00:28:23.273 00:28:23.273 Dumping successful admin opcodes: 00:28:23.273 8, 9, 10, 24, 00:28:23.273 Dumping successful io opcodes: 00:28:23.273 0, 00:28:23.273 NS: 0x200003a1ef00 I/O qp, Total commands completed: 989356, total successful commands: 3876, random_seed: 941200704 00:28:23.273 NS: 0x200003a1ef00 admin qp, Total commands completed: 246450, total successful commands: 1984, random_seed: 512800192 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2246382 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 2246382 ']' 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 2246382 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2246382 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2246382' 00:28:23.273 killing process with pid 2246382 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 2246382 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 2246382 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:28:23.273 00:28:23.273 real 0m32.148s 00:28:23.273 user 0m35.880s 00:28:23.273 sys 0m25.189s 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:23.273 11:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:23.273 ************************************ 00:28:23.273 END TEST nvmf_vfio_user_fuzz 00:28:23.273 ************************************ 00:28:23.273 11:36:50 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:28:23.273 11:36:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:23.273 11:36:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:23.273 11:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:23.273 ************************************ 00:28:23.273 START TEST nvmf_host_management 00:28:23.273 
************************************ 00:28:23.273 11:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:28:23.273 * Looking for test storage... 00:28:23.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.274 11:36:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
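The nvmftestinit/prepare_net_devs call traced here decides whether real hardware is available by scanning the PCI bus for the NIC families the run supports; this run is pinned to the E810 family (the [[ e810 == e810 ]] branches in the lines that follow), i.e. Intel vendor 0x8086, device 0x159b. A simplified stand-alone sketch of that scan (the real helper in nvmf/common.sh also caches the bus, knows the x722 and Mellanox IDs, and applies RDMA-specific filtering):

# enumerate E810 functions (vendor 0x8086, device 0x159b) and the net devices behind them
pci_devs=() net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    pci_devs+=("${pci##*/}")
    for netdir in "$pci"/net/*; do
        [[ -e $netdir ]] && net_devs+=("${netdir##*/}")
    done
done
echo "is_hw=$([[ ${#net_devs[@]} -gt 0 ]] && echo yes || echo no) net_devs: ${net_devs[*]}"

On this node the scan finds two ports, cvl_0_0 and cvl_0_1, which the trace then splits across the cvl_0_0_ns_spdk network namespace so target and initiator can reach each other over a real link.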
00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:28:23.275 11:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.871 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.872 11:36:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:29.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:29.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:29.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:29.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.872 11:36:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:29.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:28:29.872 00:28:29.872 --- 10.0.0.2 ping statistics --- 00:28:29.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.872 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:28:29.872 00:28:29.872 --- 10.0.0.1 ping statistics --- 00:28:29.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.872 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2256402 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2256402 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2256402 ']' 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:29.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:29.872 11:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:29.872 [2024-06-10 11:36:58.004987] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:28:29.872 [2024-06-10 11:36:58.005040] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.872 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.873 [2024-06-10 11:36:58.072089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.873 [2024-06-10 11:36:58.138761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.873 [2024-06-10 11:36:58.138793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.873 [2024-06-10 11:36:58.138801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.873 [2024-06-10 11:36:58.138807] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.873 [2024-06-10 11:36:58.138813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:29.873 [2024-06-10 11:36:58.138924] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.873 [2024-06-10 11:36:58.139084] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.873 [2024-06-10 11:36:58.139243] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.873 [2024-06-10 11:36:58.139244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.873 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:29.873 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:28:29.873 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:29.873 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:29.873 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.134 [2024-06-10 11:36:58.872440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.134 Malloc0 00:28:30.134 [2024-06-10 11:36:58.933401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2256704 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2256704 /var/tmp/bdevperf.sock 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2256704 ']' 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:30.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
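The rpcs.txt batch that host_management.sh writes and cats into rpc_cmd is not echoed in the trace. Below is a representative reconstruction of the provisioning sequence, combining the transport RPC that is traced explicitly with a plausible subsystem setup inferred from what the log does show (the Malloc0 bdev, the 10.0.0.2:4420 TCP listener, the NVMF_SERIAL value from nvmf/common.sh, and the cnode0/host0 NQNs used later); the exact flags in the real script may differ:

rpc=./spdk/scripts/rpc.py                                    # path is illustrative
$rpc nvmf_create_transport -t tcp -o -u 8192                 # matches the traced rpc_cmd
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Because the RPC endpoint is a UNIX-domain socket on the shared filesystem, these commands work from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.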
00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:30.134 { 00:28:30.134 "params": { 00:28:30.134 "name": "Nvme$subsystem", 00:28:30.134 "trtype": "$TEST_TRANSPORT", 00:28:30.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.134 "adrfam": "ipv4", 00:28:30.134 "trsvcid": "$NVMF_PORT", 00:28:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.134 "hdgst": ${hdgst:-false}, 00:28:30.134 "ddgst": ${ddgst:-false} 00:28:30.134 }, 00:28:30.134 "method": "bdev_nvme_attach_controller" 00:28:30.134 } 00:28:30.134 EOF 00:28:30.134 )") 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:28:30.134 11:36:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:30.134 "params": { 00:28:30.134 "name": "Nvme0", 00:28:30.134 "trtype": "tcp", 00:28:30.134 "traddr": "10.0.0.2", 00:28:30.134 "adrfam": "ipv4", 00:28:30.134 "trsvcid": "4420", 00:28:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:30.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:30.134 "hdgst": false, 00:28:30.134 "ddgst": false 00:28:30.134 }, 00:28:30.134 "method": "bdev_nvme_attach_controller" 00:28:30.134 }' 00:28:30.134 [2024-06-10 11:36:59.040522] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:28:30.134 [2024-06-10 11:36:59.040572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256704 ] 00:28:30.134 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.134 [2024-06-10 11:36:59.099122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.395 [2024-06-10 11:36:59.163788] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.395 Running I/O for 10 seconds... 
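The JSON fragment that gen_nvmf_target_json prints above reaches bdevperf through /dev/fd/63. Written to a regular file, the same run can be reproduced roughly as follows; note that the outer subsystems/bdev wrapper is assumed here (gen_nvmf_target_json normally adds it around the printed fragment, but it is not shown verbatim in the trace):

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 64-deep, 64 KiB verify workload for 10 seconds, same knobs as the traced invocation
./spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10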
00:28:30.395 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:30.395 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:28:30.395 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:30.395 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.395 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:30.655 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.957 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.957 [2024-06-10 11:36:59.732016] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915180 is same with the state(5) to be set 00:28:30.957 [2024-06-10 11:36:59.732231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.957 [2024-06-10 11:36:59.732498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.957 [2024-06-10 11:36:59.732506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.732988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.732996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.958 [2024-06-10 11:36:59.733218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.958 [2024-06-10 11:36:59.733228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 
[2024-06-10 11:36:59.733302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.959 [2024-06-10 11:36:59.733405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:30.959 [2024-06-10 11:36:59.733459] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8c4b0 was disconnected and freed. reset controller. 
00:28:30.959 [2024-06-10 11:36:59.734656] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:30.959 task offset: 75904 on job bdev=Nvme0n1 fails 00:28:30.959 00:28:30.959 Latency(us) 00:28:30.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:30.959 Job: Nvme0n1 ended in about 0.43 seconds with error 00:28:30.959 Verification LBA range: start 0x0 length 0x400 00:28:30.959 Nvme0n1 : 0.43 1351.14 84.45 150.13 0.00 41374.68 1679.36 36481.71 00:28:30.959 =================================================================================================================== 00:28:30.959 Total : 1351.14 84.45 150.13 0.00 41374.68 1679.36 36481.71 00:28:30.959 [2024-06-10 11:36:59.736655] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:30.959 [2024-06-10 11:36:59.736683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1653510 (9): Bad file descriptor 00:28:30.959 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.959 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:30.959 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.959 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:30.959 11:36:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.959 11:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:30.959 [2024-06-10 11:36:59.758187] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2256704 00:28:31.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2256704) - No such process 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:31.957 { 00:28:31.957 "params": { 00:28:31.957 "name": "Nvme$subsystem", 00:28:31.957 "trtype": "$TEST_TRANSPORT", 00:28:31.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.957 "adrfam": "ipv4", 00:28:31.957 "trsvcid": "$NVMF_PORT", 00:28:31.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.957 "hdgst": ${hdgst:-false}, 00:28:31.957 "ddgst": ${ddgst:-false} 00:28:31.957 }, 00:28:31.957 "method": "bdev_nvme_attach_controller" 00:28:31.957 } 00:28:31.957 EOF 00:28:31.957 )") 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:28:31.957 11:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:31.957 "params": { 00:28:31.957 "name": "Nvme0", 00:28:31.957 "trtype": "tcp", 00:28:31.957 "traddr": "10.0.0.2", 00:28:31.957 "adrfam": "ipv4", 00:28:31.957 "trsvcid": "4420", 00:28:31.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.957 "hdgst": false, 00:28:31.957 "ddgst": false 00:28:31.957 }, 00:28:31.957 "method": "bdev_nvme_attach_controller" 00:28:31.957 }' 00:28:31.957 [2024-06-10 11:37:00.806428] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:28:31.957 [2024-06-10 11:37:00.806483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257062 ] 00:28:31.957 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.957 [2024-06-10 11:37:00.865227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.217 [2024-06-10 11:37:00.929170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.217 Running I/O for 1 seconds... 
00:28:33.601 00:28:33.601 Latency(us) 00:28:33.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.601 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:33.601 Verification LBA range: start 0x0 length 0x400 00:28:33.601 Nvme0n1 : 1.01 1393.43 87.09 0.00 0.00 45161.76 10704.21 36700.16 00:28:33.601 =================================================================================================================== 00:28:33.601 Total : 1393.43 87.09 0.00 0.00 45161.76 10704.21 36700.16 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:33.602 rmmod nvme_tcp 00:28:33.602 rmmod nvme_fabrics 00:28:33.602 rmmod nvme_keyring 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2256402 ']' 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2256402 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 2256402 ']' 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 2256402 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2256402 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2256402' 00:28:33.602 killing process with pid 2256402 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 2256402 00:28:33.602 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 2256402 00:28:33.602 [2024-06-10 11:37:02.548299] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.862 11:37:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.776 11:37:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.776 11:37:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:35.776 00:28:35.776 real 0m14.103s 00:28:35.776 user 0m22.055s 00:28:35.776 sys 0m6.334s 00:28:35.776 11:37:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:35.776 11:37:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:35.776 ************************************ 00:28:35.776 END TEST nvmf_host_management 00:28:35.776 ************************************ 00:28:35.776 11:37:04 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:28:35.776 11:37:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:35.776 11:37:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:35.776 11:37:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:35.776 ************************************ 00:28:35.776 START TEST nvmf_lvol 00:28:35.776 ************************************ 00:28:35.776 11:37:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:28:36.038 * Looking for test storage... 
00:28:36.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.038 11:37:04 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:28:36.038 11:37:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:42.623 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.623 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:28:42.623 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:42.624 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:42.624 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:42.624 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:42.624 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:42.624 
11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:42.624 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:42.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:28:42.886 00:28:42.886 --- 10.0.0.2 ping statistics --- 00:28:42.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.886 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:28:42.886 00:28:42.886 --- 10.0.0.1 ping statistics --- 00:28:42.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.886 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2261483 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2261483 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:28:42.886 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 2261483 ']' 00:28:42.887 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.887 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:42.887 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.887 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:42.887 11:37:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:42.887 [2024-06-10 11:37:11.812012] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:28:42.887 [2024-06-10 11:37:11.812075] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.887 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.147 [2024-06-10 11:37:11.885954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.147 [2024-06-10 11:37:11.961161] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.148 [2024-06-10 11:37:11.961202] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:43.148 [2024-06-10 11:37:11.961209] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.148 [2024-06-10 11:37:11.961216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.148 [2024-06-10 11:37:11.961222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.148 [2024-06-10 11:37:11.961335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.148 [2024-06-10 11:37:11.961455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.148 [2024-06-10 11:37:11.961458] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.717 11:37:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:43.717 11:37:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:28:43.717 11:37:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:43.717 11:37:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:43.717 11:37:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:43.717 11:37:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.717 11:37:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:43.977 [2024-06-10 11:37:12.870221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.977 11:37:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:44.237 11:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:44.237 11:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:44.497 11:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:44.497 11:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:44.758 11:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:45.019 11:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=20e98fa4-318e-41f9-b452-648390b7db65 00:28:45.019 11:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 20e98fa4-318e-41f9-b452-648390b7db65 lvol 20 00:28:45.279 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6776d94d-372a-4001-92f1-09468b93ab89 00:28:45.279 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:45.279 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6776d94d-372a-4001-92f1-09468b93ab89 00:28:45.540 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:45.800 [2024-06-10 11:37:14.634721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.800 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.060 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2262104 00:28:46.060 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:46.060 11:37:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:46.060 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.000 11:37:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6776d94d-372a-4001-92f1-09468b93ab89 MY_SNAPSHOT 00:28:47.261 11:37:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d6ed169c-e914-41c4-b343-dd1f8baf308c 00:28:47.261 11:37:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6776d94d-372a-4001-92f1-09468b93ab89 30 00:28:47.520 11:37:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d6ed169c-e914-41c4-b343-dd1f8baf308c MY_CLONE 00:28:47.780 11:37:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=eae5196b-b42d-4443-a787-3037331e89dd 00:28:47.781 11:37:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate eae5196b-b42d-4443-a787-3037331e89dd 00:28:48.352 11:37:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2262104 00:28:56.496 Initializing NVMe Controllers 00:28:56.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:56.496 Controller IO queue size 128, less than required. 00:28:56.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:56.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:56.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:56.496 Initialization complete. Launching workers. 
00:28:56.496 ======================================================== 00:28:56.496 Latency(us) 00:28:56.496 Device Information : IOPS MiB/s Average min max 00:28:56.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12457.70 48.66 10276.90 1502.02 59903.64 00:28:56.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11843.20 46.26 10810.44 1207.61 73795.87 00:28:56.496 ======================================================== 00:28:56.496 Total : 24300.90 94.93 10536.92 1207.61 73795.87 00:28:56.496 00:28:56.496 11:37:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:56.758 11:37:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6776d94d-372a-4001-92f1-09468b93ab89 00:28:56.758 11:37:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20e98fa4-318e-41f9-b452-648390b7db65 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.020 rmmod nvme_tcp 00:28:57.020 rmmod nvme_fabrics 00:28:57.020 rmmod nvme_keyring 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2261483 ']' 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2261483 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 2261483 ']' 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 2261483 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:57.020 11:37:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2261483 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2261483' 00:28:57.281 killing process with pid 2261483 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 2261483 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 2261483 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:57.281 
11:37:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.281 11:37:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.830 00:28:59.830 real 0m23.532s 00:28:59.830 user 1m5.833s 00:28:59.830 sys 0m7.840s 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.830 ************************************ 00:28:59.830 END TEST nvmf_lvol 00:28:59.830 ************************************ 00:28:59.830 11:37:28 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:28:59.830 11:37:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:59.830 11:37:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:59.830 11:37:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:59.830 ************************************ 00:28:59.830 START TEST nvmf_lvs_grow 00:28:59.830 ************************************ 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:28:59.830 * Looking for test storage... 
00:28:59.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.830 11:37:28 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:28:59.831 11:37:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:06.486 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:06.486 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:06.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:06.486 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.486 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.487 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.487 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:06.487 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.487 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.748 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.748 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:06.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:06.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.779 ms 00:29:06.748 00:29:06.748 --- 10.0.0.2 ping statistics --- 00:29:06.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.748 rtt min/avg/max/mdev = 0.779/0.779/0.779/0.000 ms 00:29:06.748 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:06.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:29:06.748 00:29:06.748 --- 10.0.0.1 ping statistics --- 00:29:06.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.748 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:06.748 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2268590 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2268590 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 2268590 ']' 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:06.749 11:37:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:06.749 [2024-06-10 11:37:35.598344] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:29:06.749 [2024-06-10 11:37:35.598412] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.749 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.749 [2024-06-10 11:37:35.668808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.011 [2024-06-10 11:37:35.741918] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.011 [2024-06-10 11:37:35.741961] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
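(Reader's note, not part of the test output: the nvmf_tcp_init block traced above amounts to a small namespace-based topology. The sketch below is a hand-condensed equivalent; the interface names cvl_0_0/cvl_0_1, the namespace name, the addresses and port 4420 are taken from this log, so adapt them to local hardware.)
# Condensed sketch of the TCP test topology built by nvmf/common.sh above
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) arriving on cvl_0_1
ping -c 1 10.0.0.2                                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # namespace -> initiator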
00:29:07.011 [2024-06-10 11:37:35.741973] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.011 [2024-06-10 11:37:35.741979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.011 [2024-06-10 11:37:35.741985] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.011 [2024-06-10 11:37:35.742002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.584 11:37:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:07.584 11:37:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:29:07.584 11:37:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:07.584 11:37:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:07.584 11:37:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.584 11:37:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.584 11:37:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:07.846 [2024-06-10 11:37:36.685858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 ************************************ 00:29:07.846 START TEST lvs_grow_clean 00:29:07.846 ************************************ 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:07.846 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:08.108 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:29:08.108 11:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:08.369 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4679ccbe-82fe-4415-b7fa-118abade0485 00:29:08.369 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:08.369 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:08.369 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:08.369 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:08.369 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4679ccbe-82fe-4415-b7fa-118abade0485 lvol 150 00:29:08.630 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d025c188-1b9b-4c29-a912-761b4dd4e17d 00:29:08.630 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:08.630 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:08.892 [2024-06-10 11:37:37.727612] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:08.893 [2024-06-10 11:37:37.727663] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:08.893 true 00:29:08.893 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:08.893 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:09.154 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:09.154 11:37:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:09.416 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d025c188-1b9b-4c29-a912-761b4dd4e17d 00:29:09.416 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:09.677 [2024-06-10 11:37:38.534027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.677 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2269201 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2269201 /var/tmp/bdevperf.sock 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 2269201 ']' 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:09.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:09.939 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.939 [2024-06-10 11:37:38.798729] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
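(Reader's note: the lvs_grow_clean setup traced above reduces to the RPC sequence sketched here. It is an illustration only; the paths and the 200M/150M/400M sizes are the ones used by this run, and the transport flags are reproduced as-is without interpreting them.)
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py
aio_file=$SPDK_DIR/test/nvmf/target/aio_bdev
$rpc nvmf_create_transport -t tcp -o -u 8192        # flags as used in this run
truncate -s 200M "$aio_file"                         # 200 MiB backing file
$rpc bdev_aio_create "$aio_file" aio_bdev 4096       # AIO bdev with 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # 150 MiB lvol (38912 blocks of 4 KiB)
truncate -s 400M "$aio_file"                         # grow the file underneath the bdev
$rpc bdev_aio_rescan aio_bdev                        # bdev picks up the new size (51200 -> 102400 blocks)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420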
00:29:09.939 [2024-06-10 11:37:38.798779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269201 ] 00:29:09.939 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.939 [2024-06-10 11:37:38.856582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.200 [2024-06-10 11:37:38.920809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.200 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:10.200 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:29:10.200 11:37:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:10.462 Nvme0n1 00:29:10.462 11:37:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:10.724 [ 00:29:10.724 { 00:29:10.724 "name": "Nvme0n1", 00:29:10.724 "aliases": [ 00:29:10.724 "d025c188-1b9b-4c29-a912-761b4dd4e17d" 00:29:10.724 ], 00:29:10.724 "product_name": "NVMe disk", 00:29:10.724 "block_size": 4096, 00:29:10.724 "num_blocks": 38912, 00:29:10.724 "uuid": "d025c188-1b9b-4c29-a912-761b4dd4e17d", 00:29:10.724 "assigned_rate_limits": { 00:29:10.724 "rw_ios_per_sec": 0, 00:29:10.724 "rw_mbytes_per_sec": 0, 00:29:10.724 "r_mbytes_per_sec": 0, 00:29:10.724 "w_mbytes_per_sec": 0 00:29:10.724 }, 00:29:10.724 "claimed": false, 00:29:10.724 "zoned": false, 00:29:10.724 "supported_io_types": { 00:29:10.724 "read": true, 00:29:10.724 "write": true, 00:29:10.724 "unmap": true, 00:29:10.724 "write_zeroes": true, 00:29:10.724 "flush": true, 00:29:10.724 "reset": true, 00:29:10.724 "compare": true, 00:29:10.724 "compare_and_write": true, 00:29:10.724 "abort": true, 00:29:10.724 "nvme_admin": true, 00:29:10.724 "nvme_io": true 00:29:10.724 }, 00:29:10.724 "memory_domains": [ 00:29:10.724 { 00:29:10.724 "dma_device_id": "system", 00:29:10.724 "dma_device_type": 1 00:29:10.724 } 00:29:10.724 ], 00:29:10.724 "driver_specific": { 00:29:10.724 "nvme": [ 00:29:10.724 { 00:29:10.724 "trid": { 00:29:10.724 "trtype": "TCP", 00:29:10.724 "adrfam": "IPv4", 00:29:10.724 "traddr": "10.0.0.2", 00:29:10.724 "trsvcid": "4420", 00:29:10.724 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:10.724 }, 00:29:10.724 "ctrlr_data": { 00:29:10.724 "cntlid": 1, 00:29:10.724 "vendor_id": "0x8086", 00:29:10.724 "model_number": "SPDK bdev Controller", 00:29:10.724 "serial_number": "SPDK0", 00:29:10.724 "firmware_revision": "24.09", 00:29:10.724 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.724 "oacs": { 00:29:10.724 "security": 0, 00:29:10.724 "format": 0, 00:29:10.724 "firmware": 0, 00:29:10.724 "ns_manage": 0 00:29:10.724 }, 00:29:10.724 "multi_ctrlr": true, 00:29:10.724 "ana_reporting": false 00:29:10.724 }, 00:29:10.724 "vs": { 00:29:10.724 "nvme_version": "1.3" 00:29:10.724 }, 00:29:10.724 "ns_data": { 00:29:10.724 "id": 1, 00:29:10.724 "can_share": true 00:29:10.724 } 00:29:10.724 } 00:29:10.724 ], 00:29:10.724 "mp_policy": "active_passive" 00:29:10.724 } 00:29:10.724 } 00:29:10.724 ] 00:29:10.724 11:37:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2269501 00:29:10.724 11:37:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:10.724 11:37:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:10.986 Running I/O for 10 seconds... 00:29:11.930 Latency(us) 00:29:11.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.930 Nvme0n1 : 1.00 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:29:11.930 =================================================================================================================== 00:29:11.930 Total : 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:29:11.930 00:29:12.876 11:37:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:12.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.876 Nvme0n1 : 2.00 18146.50 70.88 0.00 0.00 0.00 0.00 0.00 00:29:12.876 =================================================================================================================== 00:29:12.876 Total : 18146.50 70.88 0.00 0.00 0.00 0.00 0.00 00:29:12.876 00:29:12.876 true 00:29:12.876 11:37:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:12.876 11:37:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:13.137 11:37:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:13.137 11:37:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:13.137 11:37:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2269501 00:29:14.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.079 Nvme0n1 : 3.00 18190.33 71.06 0.00 0.00 0.00 0.00 0.00 00:29:14.079 =================================================================================================================== 00:29:14.079 Total : 18190.33 71.06 0.00 0.00 0.00 0.00 0.00 00:29:14.079 00:29:15.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.022 Nvme0n1 : 4.00 18233.50 71.22 0.00 0.00 0.00 0.00 0.00 00:29:15.022 =================================================================================================================== 00:29:15.022 Total : 18233.50 71.22 0.00 0.00 0.00 0.00 0.00 00:29:15.022 00:29:16.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.018 Nvme0n1 : 5.00 18259.60 71.33 0.00 0.00 0.00 0.00 0.00 00:29:16.018 =================================================================================================================== 00:29:16.018 Total : 18259.60 71.33 0.00 0.00 0.00 0.00 0.00 00:29:16.018 00:29:16.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.963 Nvme0n1 : 6.00 18277.50 71.40 0.00 0.00 0.00 0.00 0.00 00:29:16.963 
=================================================================================================================== 00:29:16.963 Total : 18277.50 71.40 0.00 0.00 0.00 0.00 0.00 00:29:16.963 00:29:17.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.908 Nvme0n1 : 7.00 18292.00 71.45 0.00 0.00 0.00 0.00 0.00 00:29:17.908 =================================================================================================================== 00:29:17.908 Total : 18292.00 71.45 0.00 0.00 0.00 0.00 0.00 00:29:17.908 00:29:18.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.851 Nvme0n1 : 8.00 18307.12 71.51 0.00 0.00 0.00 0.00 0.00 00:29:18.851 =================================================================================================================== 00:29:18.851 Total : 18307.12 71.51 0.00 0.00 0.00 0.00 0.00 00:29:18.851 00:29:19.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.795 Nvme0n1 : 9.00 18320.44 71.56 0.00 0.00 0.00 0.00 0.00 00:29:19.795 =================================================================================================================== 00:29:19.795 Total : 18320.44 71.56 0.00 0.00 0.00 0.00 0.00 00:29:19.795 00:29:21.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.179 Nvme0n1 : 10.00 18324.80 71.58 0.00 0.00 0.00 0.00 0.00 00:29:21.179 =================================================================================================================== 00:29:21.179 Total : 18324.80 71.58 0.00 0.00 0.00 0.00 0.00 00:29:21.179 00:29:21.179 00:29:21.179 Latency(us) 00:29:21.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.179 Nvme0n1 : 10.01 18326.88 71.59 0.00 0.00 6980.73 4369.07 12342.61 00:29:21.179 =================================================================================================================== 00:29:21.179 Total : 18326.88 71.59 0.00 0.00 6980.73 4369.07 12342.61 00:29:21.179 0 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2269201 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 2269201 ']' 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 2269201 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2269201 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2269201' 00:29:21.179 killing process with pid 2269201 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 2269201 00:29:21.179 Received shutdown signal, test time was about 10.000000 seconds 00:29:21.179 00:29:21.179 Latency(us) 00:29:21.179 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:29:21.179 =================================================================================================================== 00:29:21.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 2269201 00:29:21.179 11:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:21.179 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:21.441 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:21.441 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:21.701 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:21.702 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:21.702 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:21.963 [2024-06-10 11:37:50.750404] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:21.963 11:37:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:22.224 request: 00:29:22.224 { 00:29:22.224 "uuid": "4679ccbe-82fe-4415-b7fa-118abade0485", 00:29:22.224 "method": "bdev_lvol_get_lvstores", 00:29:22.224 "req_id": 1 00:29:22.224 } 00:29:22.224 Got JSON-RPC error response 00:29:22.224 response: 00:29:22.224 { 00:29:22.224 "code": -19, 00:29:22.224 "message": "No such device" 00:29:22.224 } 00:29:22.224 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:29:22.224 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:22.224 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:22.224 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:22.224 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:22.485 aio_bdev 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d025c188-1b9b-4c29-a912-761b4dd4e17d 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=d025c188-1b9b-4c29-a912-761b4dd4e17d 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:22.485 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d025c188-1b9b-4c29-a912-761b4dd4e17d -t 2000 00:29:22.745 [ 00:29:22.745 { 00:29:22.745 "name": "d025c188-1b9b-4c29-a912-761b4dd4e17d", 00:29:22.745 "aliases": [ 00:29:22.745 "lvs/lvol" 00:29:22.745 ], 00:29:22.745 "product_name": "Logical Volume", 00:29:22.746 "block_size": 4096, 00:29:22.746 "num_blocks": 38912, 00:29:22.746 "uuid": "d025c188-1b9b-4c29-a912-761b4dd4e17d", 00:29:22.746 "assigned_rate_limits": { 00:29:22.746 "rw_ios_per_sec": 0, 00:29:22.746 "rw_mbytes_per_sec": 0, 00:29:22.746 "r_mbytes_per_sec": 0, 00:29:22.746 "w_mbytes_per_sec": 0 00:29:22.746 }, 00:29:22.746 "claimed": false, 00:29:22.746 "zoned": false, 00:29:22.746 "supported_io_types": { 00:29:22.746 "read": true, 00:29:22.746 "write": true, 00:29:22.746 "unmap": true, 00:29:22.746 "write_zeroes": true, 00:29:22.746 "flush": false, 00:29:22.746 "reset": true, 00:29:22.746 "compare": false, 00:29:22.746 "compare_and_write": false, 00:29:22.746 "abort": false, 00:29:22.746 "nvme_admin": false, 00:29:22.746 "nvme_io": false 00:29:22.746 }, 00:29:22.746 "driver_specific": { 00:29:22.746 "lvol": { 00:29:22.746 "lvol_store_uuid": "4679ccbe-82fe-4415-b7fa-118abade0485", 00:29:22.746 "base_bdev": "aio_bdev", 
00:29:22.746 "thin_provision": false, 00:29:22.746 "num_allocated_clusters": 38, 00:29:22.746 "snapshot": false, 00:29:22.746 "clone": false, 00:29:22.746 "esnap_clone": false 00:29:22.746 } 00:29:22.746 } 00:29:22.746 } 00:29:22.746 ] 00:29:22.746 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:29:22.746 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:22.746 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:23.006 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:23.006 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:23.006 11:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:23.266 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:23.266 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d025c188-1b9b-4c29-a912-761b4dd4e17d 00:29:23.266 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4679ccbe-82fe-4415-b7fa-118abade0485 00:29:23.527 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:23.788 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.788 00:29:23.788 real 0m15.962s 00:29:23.788 user 0m15.746s 00:29:23.788 sys 0m1.327s 00:29:23.788 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:23.788 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.788 ************************************ 00:29:23.788 END TEST lvs_grow_clean 00:29:23.788 ************************************ 00:29:23.788 11:37:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:23.788 11:37:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:23.788 11:37:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:23.788 11:37:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.048 ************************************ 00:29:24.048 START TEST lvs_grow_dirty 00:29:24.048 ************************************ 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.048 11:37:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:24.309 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:24.309 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:24.309 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:24.309 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:24.309 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:24.568 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:24.568 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:24.568 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f88fba80-34ae-4672-8cde-2bbb57738bc9 lvol 150 00:29:24.828 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fedb7461-150b-436f-8483-7c04b0e65aa2 00:29:24.828 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:24.828 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:25.088 [2024-06-10 11:37:53.823183] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:25.088 [2024-06-10 11:37:53.823233] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:25.088 true 00:29:25.088 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:25.088 11:37:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:29:25.088 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:25.088 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:25.346 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fedb7461-150b-436f-8483-7c04b0e65aa2 00:29:25.621 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.894 [2024-06-10 11:37:54.605495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2272573 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2272573 /var/tmp/bdevperf.sock 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2272573 ']' 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:25.894 11:37:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:26.154 [2024-06-10 11:37:54.878795] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
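(Reader's note: as in the clean run earlier in this log, the I/O side of the test is a bdevperf process attached over NVMe/TCP while the lvstore is grown underneath it. The outline below is a sketch; the bdevperf flags and socket path are copied from the trace, and the lvstore UUID is the one created above for this dirty run.)
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
lvs=f88fba80-34ae-4672-8cde-2bbb57738bc9
# start bdevperf idle (-z) so a controller can be attached through its RPC socket first
$SPDK_DIR/build/examples/bdevperf -r "$sock" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &   # 10 s of 4 KiB random writes
$rpc bdev_lvol_grow_lvstore -u "$lvs"                                     # grow while I/O is in flight
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 49 -> 99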
00:29:26.155 [2024-06-10 11:37:54.878863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272573 ] 00:29:26.155 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.155 [2024-06-10 11:37:54.936824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.155 [2024-06-10 11:37:55.001021] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.155 11:37:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:26.155 11:37:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:29:26.155 11:37:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:26.416 Nvme0n1 00:29:26.416 11:37:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:26.677 [ 00:29:26.677 { 00:29:26.677 "name": "Nvme0n1", 00:29:26.677 "aliases": [ 00:29:26.677 "fedb7461-150b-436f-8483-7c04b0e65aa2" 00:29:26.677 ], 00:29:26.677 "product_name": "NVMe disk", 00:29:26.677 "block_size": 4096, 00:29:26.677 "num_blocks": 38912, 00:29:26.677 "uuid": "fedb7461-150b-436f-8483-7c04b0e65aa2", 00:29:26.677 "assigned_rate_limits": { 00:29:26.677 "rw_ios_per_sec": 0, 00:29:26.677 "rw_mbytes_per_sec": 0, 00:29:26.677 "r_mbytes_per_sec": 0, 00:29:26.677 "w_mbytes_per_sec": 0 00:29:26.677 }, 00:29:26.677 "claimed": false, 00:29:26.677 "zoned": false, 00:29:26.677 "supported_io_types": { 00:29:26.677 "read": true, 00:29:26.677 "write": true, 00:29:26.677 "unmap": true, 00:29:26.677 "write_zeroes": true, 00:29:26.677 "flush": true, 00:29:26.677 "reset": true, 00:29:26.677 "compare": true, 00:29:26.677 "compare_and_write": true, 00:29:26.677 "abort": true, 00:29:26.677 "nvme_admin": true, 00:29:26.677 "nvme_io": true 00:29:26.677 }, 00:29:26.677 "memory_domains": [ 00:29:26.677 { 00:29:26.677 "dma_device_id": "system", 00:29:26.677 "dma_device_type": 1 00:29:26.677 } 00:29:26.677 ], 00:29:26.677 "driver_specific": { 00:29:26.677 "nvme": [ 00:29:26.677 { 00:29:26.677 "trid": { 00:29:26.677 "trtype": "TCP", 00:29:26.677 "adrfam": "IPv4", 00:29:26.677 "traddr": "10.0.0.2", 00:29:26.677 "trsvcid": "4420", 00:29:26.677 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:26.677 }, 00:29:26.677 "ctrlr_data": { 00:29:26.677 "cntlid": 1, 00:29:26.677 "vendor_id": "0x8086", 00:29:26.677 "model_number": "SPDK bdev Controller", 00:29:26.677 "serial_number": "SPDK0", 00:29:26.677 "firmware_revision": "24.09", 00:29:26.677 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.677 "oacs": { 00:29:26.677 "security": 0, 00:29:26.677 "format": 0, 00:29:26.677 "firmware": 0, 00:29:26.677 "ns_manage": 0 00:29:26.677 }, 00:29:26.677 "multi_ctrlr": true, 00:29:26.677 "ana_reporting": false 00:29:26.677 }, 00:29:26.677 "vs": { 00:29:26.677 "nvme_version": "1.3" 00:29:26.677 }, 00:29:26.677 "ns_data": { 00:29:26.677 "id": 1, 00:29:26.677 "can_share": true 00:29:26.677 } 00:29:26.677 } 00:29:26.677 ], 00:29:26.677 "mp_policy": "active_passive" 00:29:26.677 } 00:29:26.677 } 00:29:26.677 ] 00:29:26.677 11:37:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2272585 00:29:26.677 11:37:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:26.677 11:37:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:26.938 Running I/O for 10 seconds... 00:29:27.879 Latency(us) 00:29:27.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.879 Nvme0n1 : 1.00 18175.00 71.00 0.00 0.00 0.00 0.00 0.00 00:29:27.879 =================================================================================================================== 00:29:27.879 Total : 18175.00 71.00 0.00 0.00 0.00 0.00 0.00 00:29:27.879 00:29:28.823 11:37:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:28.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.823 Nvme0n1 : 2.00 18270.00 71.37 0.00 0.00 0.00 0.00 0.00 00:29:28.823 =================================================================================================================== 00:29:28.823 Total : 18270.00 71.37 0.00 0.00 0.00 0.00 0.00 00:29:28.823 00:29:28.823 true 00:29:28.823 11:37:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:28.823 11:37:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:29.083 11:37:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:29.083 11:37:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:29.083 11:37:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2272585 00:29:30.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.023 Nvme0n1 : 3.00 18286.67 71.43 0.00 0.00 0.00 0.00 0.00 00:29:30.023 =================================================================================================================== 00:29:30.023 Total : 18286.67 71.43 0.00 0.00 0.00 0.00 0.00 00:29:30.023 00:29:30.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.966 Nvme0n1 : 4.00 18319.00 71.56 0.00 0.00 0.00 0.00 0.00 00:29:30.966 =================================================================================================================== 00:29:30.966 Total : 18319.00 71.56 0.00 0.00 0.00 0.00 0.00 00:29:30.966 00:29:31.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.909 Nvme0n1 : 5.00 18339.80 71.64 0.00 0.00 0.00 0.00 0.00 00:29:31.909 =================================================================================================================== 00:29:31.909 Total : 18339.80 71.64 0.00 0.00 0.00 0.00 0.00 00:29:31.909 00:29:32.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.853 Nvme0n1 : 6.00 18355.33 71.70 0.00 0.00 0.00 0.00 0.00 00:29:32.853 
=================================================================================================================== 00:29:32.853 Total : 18355.33 71.70 0.00 0.00 0.00 0.00 0.00 00:29:32.853 00:29:33.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.792 Nvme0n1 : 7.00 18368.71 71.75 0.00 0.00 0.00 0.00 0.00 00:29:33.792 =================================================================================================================== 00:29:33.792 Total : 18368.71 71.75 0.00 0.00 0.00 0.00 0.00 00:29:33.792 00:29:34.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.737 Nvme0n1 : 8.00 18382.38 71.81 0.00 0.00 0.00 0.00 0.00 00:29:34.737 =================================================================================================================== 00:29:34.737 Total : 18382.38 71.81 0.00 0.00 0.00 0.00 0.00 00:29:34.737 00:29:36.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.120 Nvme0n1 : 9.00 18387.78 71.83 0.00 0.00 0.00 0.00 0.00 00:29:36.120 =================================================================================================================== 00:29:36.120 Total : 18387.78 71.83 0.00 0.00 0.00 0.00 0.00 00:29:36.120 00:29:37.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.063 Nvme0n1 : 10.00 18397.90 71.87 0.00 0.00 0.00 0.00 0.00 00:29:37.063 =================================================================================================================== 00:29:37.063 Total : 18397.90 71.87 0.00 0.00 0.00 0.00 0.00 00:29:37.063 00:29:37.063 00:29:37.063 Latency(us) 00:29:37.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.063 Nvme0n1 : 10.01 18399.72 71.87 0.00 0.00 6952.48 2184.53 12342.61 00:29:37.064 =================================================================================================================== 00:29:37.064 Total : 18399.72 71.87 0.00 0.00 6952.48 2184.53 12342.61 00:29:37.064 0 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2272573 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 2272573 ']' 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 2272573 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2272573 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2272573' 00:29:37.064 killing process with pid 2272573 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 2272573 00:29:37.064 Received shutdown signal, test time was about 10.000000 seconds 00:29:37.064 00:29:37.064 Latency(us) 00:29:37.064 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:29:37.064 =================================================================================================================== 00:29:37.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 2272573 00:29:37.064 11:38:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:37.324 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.586 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:37.586 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:37.586 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:37.586 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:37.586 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2268590 00:29:37.586 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2268590 00:29:37.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2268590 Killed "${NVMF_APP[@]}" "$@" 00:29:37.847 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:37.847 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2274871 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2274871 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2274871 ']' 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
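(Reader's note: the step that makes this the "dirty" variant is traced just above and below. The target holding the lvstore is SIGKILLed rather than shut down cleanly, and a fresh target then recovers the lvstore from the AIO file when the bdev is re-created. In outline, using the paths, UUIDs and PID seen in this run:)
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py
aio_file=$SPDK_DIR/test/nvmf/target/aio_bdev
lvs=f88fba80-34ae-4672-8cde-2bbb57738bc9
lvol=fedb7461-150b-436f-8483-7c04b0e65aa2
kill -9 2268590                                        # PID of the target holding the lvstore; leaves it dirty
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target
$rpc bdev_aio_create "$aio_file" aio_bdev 4096         # re-attach the backing file; blobstore recovery runs here
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b "$lvol" -t 2000                 # lvol bdev reappears after recovery
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 99: the grow persisted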
00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:37.848 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:37.848 [2024-06-10 11:38:06.629740] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:29:37.848 [2024-06-10 11:38:06.629794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.848 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.848 [2024-06-10 11:38:06.694278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.848 [2024-06-10 11:38:06.758068] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.848 [2024-06-10 11:38:06.758105] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.848 [2024-06-10 11:38:06.758117] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.848 [2024-06-10 11:38:06.758123] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.848 [2024-06-10 11:38:06.758129] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.848 [2024-06-10 11:38:06.758145] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.109 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:38.109 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:29:38.109 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:38.109 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:38.109 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:38.109 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.109 11:38:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:38.110 [2024-06-10 11:38:07.077714] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:38.110 [2024-06-10 11:38:07.077805] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:38.110 [2024-06-10 11:38:07.077835] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fedb7461-150b-436f-8483-7c04b0e65aa2 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=fedb7461-150b-436f-8483-7c04b0e65aa2 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:38.370 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fedb7461-150b-436f-8483-7c04b0e65aa2 -t 2000 00:29:38.631 [ 00:29:38.631 { 00:29:38.631 "name": "fedb7461-150b-436f-8483-7c04b0e65aa2", 00:29:38.631 "aliases": [ 00:29:38.631 "lvs/lvol" 00:29:38.631 ], 00:29:38.631 "product_name": "Logical Volume", 00:29:38.631 "block_size": 4096, 00:29:38.631 "num_blocks": 38912, 00:29:38.631 "uuid": "fedb7461-150b-436f-8483-7c04b0e65aa2", 00:29:38.631 "assigned_rate_limits": { 00:29:38.631 "rw_ios_per_sec": 0, 00:29:38.631 "rw_mbytes_per_sec": 0, 00:29:38.631 "r_mbytes_per_sec": 0, 00:29:38.631 "w_mbytes_per_sec": 0 00:29:38.631 }, 00:29:38.631 "claimed": false, 00:29:38.631 "zoned": false, 00:29:38.631 "supported_io_types": { 00:29:38.631 "read": true, 00:29:38.631 "write": true, 00:29:38.631 "unmap": true, 00:29:38.631 "write_zeroes": true, 00:29:38.631 "flush": false, 00:29:38.631 "reset": true, 00:29:38.631 "compare": false, 00:29:38.631 "compare_and_write": false, 00:29:38.631 "abort": false, 00:29:38.631 "nvme_admin": false, 00:29:38.631 "nvme_io": false 00:29:38.631 }, 00:29:38.631 "driver_specific": { 00:29:38.631 "lvol": { 00:29:38.631 "lvol_store_uuid": "f88fba80-34ae-4672-8cde-2bbb57738bc9", 00:29:38.631 "base_bdev": "aio_bdev", 00:29:38.631 "thin_provision": false, 00:29:38.631 "num_allocated_clusters": 38, 00:29:38.631 "snapshot": false, 00:29:38.631 "clone": false, 00:29:38.631 "esnap_clone": false 00:29:38.631 } 00:29:38.631 } 00:29:38.631 } 00:29:38.631 ] 00:29:38.631 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:29:38.631 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:38.631 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:38.891 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:38.891 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:38.891 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:39.153 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:39.153 11:38:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:39.153 [2024-06-10 11:38:08.090259] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:39.414 request: 00:29:39.414 { 00:29:39.414 "uuid": "f88fba80-34ae-4672-8cde-2bbb57738bc9", 00:29:39.414 "method": "bdev_lvol_get_lvstores", 00:29:39.414 "req_id": 1 00:29:39.414 } 00:29:39.414 Got JSON-RPC error response 00:29:39.414 response: 00:29:39.414 { 00:29:39.414 "code": -19, 00:29:39.414 "message": "No such device" 00:29:39.414 } 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:39.414 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:39.674 aio_bdev 00:29:39.674 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fedb7461-150b-436f-8483-7c04b0e65aa2 00:29:39.674 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=fedb7461-150b-436f-8483-7c04b0e65aa2 00:29:39.674 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:29:39.674 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:29:39.674 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
00:29:39.675 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:29:39.675 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:39.935 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fedb7461-150b-436f-8483-7c04b0e65aa2 -t 2000 00:29:40.196 [ 00:29:40.196 { 00:29:40.196 "name": "fedb7461-150b-436f-8483-7c04b0e65aa2", 00:29:40.196 "aliases": [ 00:29:40.196 "lvs/lvol" 00:29:40.196 ], 00:29:40.196 "product_name": "Logical Volume", 00:29:40.196 "block_size": 4096, 00:29:40.196 "num_blocks": 38912, 00:29:40.196 "uuid": "fedb7461-150b-436f-8483-7c04b0e65aa2", 00:29:40.196 "assigned_rate_limits": { 00:29:40.196 "rw_ios_per_sec": 0, 00:29:40.196 "rw_mbytes_per_sec": 0, 00:29:40.196 "r_mbytes_per_sec": 0, 00:29:40.196 "w_mbytes_per_sec": 0 00:29:40.196 }, 00:29:40.196 "claimed": false, 00:29:40.196 "zoned": false, 00:29:40.196 "supported_io_types": { 00:29:40.196 "read": true, 00:29:40.196 "write": true, 00:29:40.196 "unmap": true, 00:29:40.196 "write_zeroes": true, 00:29:40.196 "flush": false, 00:29:40.196 "reset": true, 00:29:40.196 "compare": false, 00:29:40.196 "compare_and_write": false, 00:29:40.196 "abort": false, 00:29:40.196 "nvme_admin": false, 00:29:40.196 "nvme_io": false 00:29:40.196 }, 00:29:40.196 "driver_specific": { 00:29:40.196 "lvol": { 00:29:40.196 "lvol_store_uuid": "f88fba80-34ae-4672-8cde-2bbb57738bc9", 00:29:40.196 "base_bdev": "aio_bdev", 00:29:40.196 "thin_provision": false, 00:29:40.196 "num_allocated_clusters": 38, 00:29:40.196 "snapshot": false, 00:29:40.196 "clone": false, 00:29:40.196 "esnap_clone": false 00:29:40.196 } 00:29:40.196 } 00:29:40.196 } 00:29:40.196 ] 00:29:40.196 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:29:40.196 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:40.196 11:38:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:40.456 11:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:40.456 11:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:40.456 11:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:40.456 11:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:40.456 11:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fedb7461-150b-436f-8483-7c04b0e65aa2 00:29:40.717 11:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f88fba80-34ae-4672-8cde-2bbb57738bc9 00:29:40.977 11:38:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:41.237 00:29:41.237 real 0m17.274s 00:29:41.237 user 0m46.055s 00:29:41.237 sys 0m2.957s 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:41.237 ************************************ 00:29:41.237 END TEST lvs_grow_dirty 00:29:41.237 ************************************ 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:41.237 nvmf_trace.0 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:41.237 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:29:41.238 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:41.238 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:29:41.238 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.238 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:41.238 rmmod nvme_tcp 00:29:41.238 rmmod nvme_fabrics 00:29:41.238 rmmod nvme_keyring 00:29:41.238 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2274871 ']' 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2274871 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 2274871 ']' 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 2274871 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2274871 00:29:41.498 11:38:10 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2274871' 00:29:41.498 killing process with pid 2274871 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 2274871 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 2274871 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.498 11:38:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.043 11:38:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:44.043 00:29:44.043 real 0m44.123s 00:29:44.043 user 1m7.920s 00:29:44.043 sys 0m10.073s 00:29:44.043 11:38:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:44.043 11:38:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:44.043 ************************************ 00:29:44.043 END TEST nvmf_lvs_grow 00:29:44.043 ************************************ 00:29:44.043 11:38:12 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:29:44.043 11:38:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:44.043 11:38:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:44.043 11:38:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.043 ************************************ 00:29:44.043 START TEST nvmf_bdev_io_wait 00:29:44.043 ************************************ 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:29:44.044 * Looking for test storage... 
00:29:44.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:29:44.044 11:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:50.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:50.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:50.707 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:50.707 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:50.707 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:50.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:50.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:29:50.967 00:29:50.967 --- 10.0.0.2 ping statistics --- 00:29:50.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.967 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:29:50.967 00:29:50.967 --- 10.0.0.1 ping statistics --- 00:29:50.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.967 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2279661 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2279661 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 2279661 ']' 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:50.967 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:50.967 [2024-06-10 11:38:19.811897] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:29:50.967 [2024-06-10 11:38:19.811947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.967 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.967 [2024-06-10 11:38:19.877003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.228 [2024-06-10 11:38:19.943482] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.228 [2024-06-10 11:38:19.943522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.228 [2024-06-10 11:38:19.943530] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.228 [2024-06-10 11:38:19.943536] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.228 [2024-06-10 11:38:19.943542] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.228 [2024-06-10 11:38:19.943654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.228 [2024-06-10 11:38:19.943809] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.228 [2024-06-10 11:38:19.943902] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.228 [2024-06-10 11:38:19.943903] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.228 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:51.228 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:29:51.228 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:51.228 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:51.228 11:38:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 [2024-06-10 11:38:20.091366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.228 11:38:20 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 Malloc0 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:51.228 [2024-06-10 11:38:20.160991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2279828 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2279831 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.228 { 00:29:51.228 "params": { 00:29:51.228 "name": "Nvme$subsystem", 00:29:51.228 "trtype": "$TEST_TRANSPORT", 00:29:51.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.228 "adrfam": "ipv4", 00:29:51.228 "trsvcid": "$NVMF_PORT", 00:29:51.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.228 "hdgst": ${hdgst:-false}, 00:29:51.228 "ddgst": ${ddgst:-false} 00:29:51.228 }, 00:29:51.228 "method": "bdev_nvme_attach_controller" 00:29:51.228 } 00:29:51.228 EOF 00:29:51.228 )") 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2279834 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2279838 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.228 { 00:29:51.228 "params": { 00:29:51.228 "name": "Nvme$subsystem", 00:29:51.228 "trtype": "$TEST_TRANSPORT", 00:29:51.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.228 "adrfam": "ipv4", 00:29:51.228 "trsvcid": "$NVMF_PORT", 00:29:51.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.228 "hdgst": ${hdgst:-false}, 00:29:51.228 "ddgst": ${ddgst:-false} 00:29:51.228 }, 00:29:51.228 "method": "bdev_nvme_attach_controller" 00:29:51.228 } 00:29:51.228 EOF 00:29:51.228 )") 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.228 { 00:29:51.228 "params": { 00:29:51.228 "name": "Nvme$subsystem", 00:29:51.228 "trtype": "$TEST_TRANSPORT", 00:29:51.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.228 "adrfam": "ipv4", 00:29:51.228 "trsvcid": "$NVMF_PORT", 00:29:51.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.228 "hdgst": ${hdgst:-false}, 00:29:51.228 "ddgst": ${ddgst:-false} 00:29:51.228 }, 00:29:51.228 "method": "bdev_nvme_attach_controller" 00:29:51.228 } 00:29:51.228 EOF 00:29:51.228 )") 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.228 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:29:51.228 { 00:29:51.228 "params": { 00:29:51.229 "name": "Nvme$subsystem", 00:29:51.229 "trtype": "$TEST_TRANSPORT", 00:29:51.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.229 "adrfam": "ipv4", 00:29:51.229 "trsvcid": "$NVMF_PORT", 00:29:51.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.229 "hdgst": ${hdgst:-false}, 00:29:51.229 "ddgst": ${ddgst:-false} 00:29:51.229 }, 00:29:51.229 "method": "bdev_nvme_attach_controller" 00:29:51.229 } 00:29:51.229 EOF 00:29:51.229 )") 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2279828 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.229 "params": { 00:29:51.229 "name": "Nvme1", 00:29:51.229 "trtype": "tcp", 00:29:51.229 "traddr": "10.0.0.2", 00:29:51.229 "adrfam": "ipv4", 00:29:51.229 "trsvcid": "4420", 00:29:51.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.229 "hdgst": false, 00:29:51.229 "ddgst": false 00:29:51.229 }, 00:29:51.229 "method": "bdev_nvme_attach_controller" 00:29:51.229 }' 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.229 "params": { 00:29:51.229 "name": "Nvme1", 00:29:51.229 "trtype": "tcp", 00:29:51.229 "traddr": "10.0.0.2", 00:29:51.229 "adrfam": "ipv4", 00:29:51.229 "trsvcid": "4420", 00:29:51.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.229 "hdgst": false, 00:29:51.229 "ddgst": false 00:29:51.229 }, 00:29:51.229 "method": "bdev_nvme_attach_controller" 00:29:51.229 }' 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.229 "params": { 00:29:51.229 "name": "Nvme1", 00:29:51.229 "trtype": "tcp", 00:29:51.229 "traddr": "10.0.0.2", 00:29:51.229 "adrfam": "ipv4", 00:29:51.229 "trsvcid": "4420", 00:29:51.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.229 "hdgst": false, 00:29:51.229 "ddgst": false 00:29:51.229 }, 00:29:51.229 "method": "bdev_nvme_attach_controller" 00:29:51.229 }' 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:29:51.229 11:38:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.229 "params": { 00:29:51.229 "name": "Nvme1", 00:29:51.229 "trtype": "tcp", 00:29:51.229 "traddr": "10.0.0.2", 00:29:51.229 "adrfam": "ipv4", 00:29:51.229 "trsvcid": "4420", 00:29:51.229 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.229 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.229 "hdgst": false, 00:29:51.229 "ddgst": false 00:29:51.229 }, 00:29:51.229 "method": "bdev_nvme_attach_controller" 
00:29:51.229 }' 00:29:51.489 [2024-06-10 11:38:20.214634] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:29:51.489 [2024-06-10 11:38:20.214687] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:51.489 [2024-06-10 11:38:20.216085] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:29:51.490 [2024-06-10 11:38:20.216139] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:51.490 [2024-06-10 11:38:20.218186] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:29:51.490 [2024-06-10 11:38:20.218244] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:51.490 [2024-06-10 11:38:20.219302] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:29:51.490 [2024-06-10 11:38:20.219348] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:51.490 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.490 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.490 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.490 [2024-06-10 11:38:20.365888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.490 [2024-06-10 11:38:20.411654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.490 [2024-06-10 11:38:20.416871] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:29:51.490 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.750 [2024-06-10 11:38:20.463829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:29:51.750 [2024-06-10 11:38:20.469637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.750 [2024-06-10 11:38:20.519192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.750 [2024-06-10 11:38:20.521817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.750 [2024-06-10 11:38:20.569136] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:29:51.750 Running I/O for 1 seconds... 00:29:52.010 Running I/O for 1 seconds... 00:29:52.010 Running I/O for 1 seconds... 00:29:52.010 Running I/O for 1 seconds... 
00:29:52.952
00:29:52.952 Latency(us)
00:29:52.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.952 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:29:52.952 Nvme1n1 : 1.00 187171.36 731.14 0.00 0.00 680.94 271.36 750.93
00:29:52.952 ===================================================================================================================
00:29:52.952 Total : 187171.36 731.14 0.00 0.00 680.94 271.36 750.93
00:29:52.952
00:29:52.952 Latency(us)
00:29:52.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.952 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:29:52.952 Nvme1n1 : 1.01 8311.13 32.47 0.00 0.00 15283.37 6335.15 24794.45
00:29:52.952 ===================================================================================================================
00:29:52.952 Total : 8311.13 32.47 0.00 0.00 15283.37 6335.15 24794.45
00:29:52.952
00:29:52.952 Latency(us)
00:29:52.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.952 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:29:52.952 Nvme1n1 : 1.01 13255.12 51.78 0.00 0.00 9622.25 5515.95 19551.57
00:29:52.952 ===================================================================================================================
00:29:52.952 Total : 13255.12 51.78 0.00 0.00 9622.25 5515.95 19551.57
00:29:52.952
00:29:52.952 Latency(us)
00:29:52.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:52.952 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:29:52.952 Nvme1n1 : 1.01 7996.05 31.23 0.00 0.00 15961.04 5570.56 36918.61
00:29:52.952 ===================================================================================================================
00:29:52.952 Total : 7996.05 31.23 0.00 0.00 15961.04 5570.56 36918.61
00:29:53.213 11:38:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2279831
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2279834
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2279838
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:53.213 rmmod nvme_tcp
00:29:53.213 rmmod nvme_fabrics
00:29:53.213 rmmod nvme_keyring
00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2279661 ']' 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2279661 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 2279661 ']' 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 2279661 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2279661 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2279661' 00:29:53.213 killing process with pid 2279661 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 2279661 00:29:53.213 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 2279661 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.473 11:38:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.384 11:38:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:55.646 00:29:55.646 real 0m11.802s 00:29:55.646 user 0m16.884s 00:29:55.646 sys 0m6.543s 00:29:55.646 11:38:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:55.646 11:38:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:55.646 ************************************ 00:29:55.646 END TEST nvmf_bdev_io_wait 00:29:55.646 ************************************ 00:29:55.646 11:38:24 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:29:55.646 11:38:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:55.646 11:38:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:55.646 11:38:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.646 ************************************ 00:29:55.646 START TEST nvmf_queue_depth 00:29:55.646 ************************************ 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:29:55.646 * Looking for test storage... 00:29:55.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:29:55.646 11:38:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.787 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:03.787 
11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:03.788 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:03.788 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:03.788 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:03.788 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:03.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:03.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:30:03.788 00:30:03.788 --- 10.0.0.2 ping statistics --- 00:30:03.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.788 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:30:03.788 00:30:03.788 --- 10.0.0.1 ping statistics --- 00:30:03.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.788 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2284367 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2284367 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2284367 ']' 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:03.788 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.788 [2024-06-10 11:38:31.568985] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
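The nvmf_tcp_init steps traced above split the two ports of the E810 NIC into a target side and an initiator side. Condensed into plain commands (all of them appear in the trace; interface names are the ones discovered on this rig, and any stale addresses are flushed first), the topology is:

  ip netns add cvl_0_0_ns_spdk                                   # target network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP port 4420 on the initiator-side port
  ping -c 1 10.0.0.2                                             # default ns -> namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> default ns sanity check

The nvmf_tgt application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so it listens on 10.0.0.2 while the initiator-side tooling connects from 10.0.0.1.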
00:30:03.788 [2024-06-10 11:38:31.569025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.788 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.788 [2024-06-10 11:38:31.624530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.788 [2024-06-10 11:38:31.687879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.788 [2024-06-10 11:38:31.687912] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.788 [2024-06-10 11:38:31.687919] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.788 [2024-06-10 11:38:31.687925] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.789 [2024-06-10 11:38:31.687931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.789 [2024-06-10 11:38:31.687947] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 [2024-06-10 11:38:31.808826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 Malloc0 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.789 11:38:31 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 [2024-06-10 11:38:31.878679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2284386 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2284386 /var/tmp/bdevperf.sock 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2284386 ']' 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:03.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:03.789 11:38:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 [2024-06-10 11:38:31.938222] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
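Before the run starts, queue_depth.sh has provisioned the target entirely over RPC; rpc_cmd in the trace is the harness's shorthand for SPDK's scripts/rpc.py. A condensed sketch of the sequence traced above, followed by the bdevperf-side calls that the next lines of the log show (Malloc0 is 64 MiB of 512-byte blocks backing namespace 1 of cnode1):

  # target side (nvmf_tgt already running inside cvl_0_0_ns_spdk)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits for RPC (-z), 1024-deep verify workload of 4 KiB I/O for 10 s
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With -z the bdevperf process idles after start-up, so the 10-second verify run at queue depth 1024 only begins once perform_tests is issued over /var/tmp/bdevperf.sock, as the trace below shows.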
00:30:03.789 [2024-06-10 11:38:31.938267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284386 ] 00:30:03.789 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.789 [2024-06-10 11:38:31.995867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.789 [2024-06-10 11:38:32.060293] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.789 11:38:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:03.789 11:38:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:30:03.789 11:38:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:03.789 11:38:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:03.789 11:38:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:03.789 NVMe0n1 00:30:03.789 11:38:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:03.789 11:38:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.789 Running I/O for 10 seconds... 00:30:13.783 00:30:13.783 Latency(us) 00:30:13.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.783 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:13.783 Verification LBA range: start 0x0 length 0x4000 00:30:13.783 NVMe0n1 : 10.08 9645.40 37.68 0.00 0.00 105755.82 24466.77 71652.69 00:30:13.783 =================================================================================================================== 00:30:13.783 Total : 9645.40 37.68 0.00 0.00 105755.82 24466.77 71652.69 00:30:13.783 0 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2284386 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2284386 ']' 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2284386 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2284386 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2284386' 00:30:13.783 killing process with pid 2284386 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2284386 00:30:13.783 Received shutdown signal, test time was about 10.000000 seconds 00:30:13.783 00:30:13.783 Latency(us) 00:30:13.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.783 =================================================================================================================== 00:30:13.783 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:13.783 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2284386 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:14.044 rmmod nvme_tcp 00:30:14.044 rmmod nvme_fabrics 00:30:14.044 rmmod nvme_keyring 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2284367 ']' 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2284367 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2284367 ']' 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2284367 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2284367 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2284367' 00:30:14.044 killing process with pid 2284367 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2284367 00:30:14.044 11:38:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2284367 00:30:14.044 11:38:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:14.044 11:38:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:14.044 11:38:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:14.045 11:38:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:14.045 11:38:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:14.045 11:38:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.045 11:38:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.045 11:38:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.588 11:38:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:16.588 00:30:16.588 real 0m20.643s 00:30:16.588 user 0m23.896s 00:30:16.588 sys 
0m6.172s 00:30:16.588 11:38:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:16.588 11:38:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:16.588 ************************************ 00:30:16.588 END TEST nvmf_queue_depth 00:30:16.588 ************************************ 00:30:16.588 11:38:45 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:30:16.588 11:38:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:16.588 11:38:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:16.588 11:38:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:16.588 ************************************ 00:30:16.588 START TEST nvmf_target_multipath 00:30:16.588 ************************************ 00:30:16.588 11:38:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:30:16.588 * Looking for test storage... 00:30:16.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:16.588 11:38:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.588 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:16.588 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.589 11:38:45 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:30:16.589 11:38:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.182 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:23.183 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:23.183 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:23.183 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:23.183 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:23.183 11:38:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:23.183 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.183 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.183 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.183 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.183 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:23.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:30:23.444 00:30:23.444 --- 10.0.0.2 ping statistics --- 00:30:23.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.444 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:30:23.444 00:30:23.444 --- 10.0.0.1 ping statistics --- 00:30:23.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.444 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:23.444 only one NIC for nvmf test 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:23.444 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:23.445 rmmod nvme_tcp 00:30:23.445 rmmod nvme_fabrics 00:30:23.445 rmmod nvme_keyring 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.445 11:38:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:25.993 00:30:25.993 real 0m9.310s 00:30:25.993 user 0m1.974s 00:30:25.993 sys 0m5.244s 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:25.993 11:38:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:25.993 ************************************ 00:30:25.993 END TEST nvmf_target_multipath 00:30:25.993 ************************************ 00:30:25.993 11:38:54 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:30:25.993 11:38:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:25.993 11:38:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:25.993 11:38:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.993 ************************************ 00:30:25.993 START TEST nvmf_zcopy 00:30:25.993 ************************************ 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:30:25.993 * Looking for test storage... 
00:30:25.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.993 11:38:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.994 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:25.994 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:25.994 11:38:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:30:25.994 11:38:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:32.674 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.674 
11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:32.674 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:32.674 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:32.674 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:32.674 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.935 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:32.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:30:32.936 00:30:32.936 --- 10.0.0.2 ping statistics --- 00:30:32.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.936 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:30:32.936 00:30:32.936 --- 10.0.0.1 ping statistics --- 00:30:32.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.936 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2294850 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2294850 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 2294850 ']' 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:32.936 11:39:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.936 [2024-06-10 11:39:01.834700] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:32.936 [2024-06-10 11:39:01.834768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.936 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.936 [2024-06-10 11:39:01.904699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.197 [2024-06-10 11:39:01.977597] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.197 [2024-06-10 11:39:01.977639] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:33.197 [2024-06-10 11:39:01.977648] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.197 [2024-06-10 11:39:01.977656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.197 [2024-06-10 11:39:01.977662] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:33.197 [2024-06-10 11:39:01.977686] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:33.770 [2024-06-10 11:39:02.732556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:33.770 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.031 [2024-06-10 11:39:02.748714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.031 malloc0 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.031 
11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.031 { 00:30:34.031 "params": { 00:30:34.031 "name": "Nvme$subsystem", 00:30:34.031 "trtype": "$TEST_TRANSPORT", 00:30:34.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.031 "adrfam": "ipv4", 00:30:34.031 "trsvcid": "$NVMF_PORT", 00:30:34.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.031 "hdgst": ${hdgst:-false}, 00:30:34.031 "ddgst": ${ddgst:-false} 00:30:34.031 }, 00:30:34.031 "method": "bdev_nvme_attach_controller" 00:30:34.031 } 00:30:34.031 EOF 00:30:34.031 )") 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:30:34.031 11:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:34.031 "params": { 00:30:34.031 "name": "Nvme1", 00:30:34.031 "trtype": "tcp", 00:30:34.031 "traddr": "10.0.0.2", 00:30:34.031 "adrfam": "ipv4", 00:30:34.031 "trsvcid": "4420", 00:30:34.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.031 "hdgst": false, 00:30:34.031 "ddgst": false 00:30:34.031 }, 00:30:34.031 "method": "bdev_nvme_attach_controller" 00:30:34.031 }' 00:30:34.031 [2024-06-10 11:39:02.828676] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:34.031 [2024-06-10 11:39:02.828725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295168 ] 00:30:34.031 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.031 [2024-06-10 11:39:02.886678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.031 [2024-06-10 11:39:02.950929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.293 Running I/O for 10 seconds... 
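For orientation, the target bring-up that the xtrace above records condenses to the short sketch below. This is a recap, not the script itself: the rpc_cmd helper seen in the trace is assumed to resolve to scripts/rpc.py, and nvmf_tgt is backgrounded by the nvmfappstart helper; every flag, path and identifier is copied from the log.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }  # assumption: the autotest wrapper forwards to rpc.py
# nvmfappstart -m 0x2: start the target (shm id 0, tracepoint mask 0xFFFF, core mask 0x2) in the test netns
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
# zcopy.sh@22: TCP transport with zero-copy enabled (remaining flags copied verbatim from the trace)
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
# zcopy.sh@24-27: one subsystem (max 10 namespaces) plus data and discovery listeners on 10.0.0.2:4420
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# zcopy.sh@29-30: 32 MB malloc bdev with 4096-byte blocks, exposed as namespace 1
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# zcopy.sh@33: 10 s verify workload, queue depth 128, 8 KiB I/O; /dev/fd/62 carries the
# bdev_nvme_attach_controller config printed in the trace above
"$SPDK_DIR/build/examples/bdevperf" --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192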
00:30:46.527 00:30:46.527 Latency(us) 00:30:46.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.527 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:46.527 Verification LBA range: start 0x0 length 0x1000 00:30:46.527 Nvme1n1 : 10.02 6238.57 48.74 0.00 0.00 20457.95 3822.93 29709.65 00:30:46.527 =================================================================================================================== 00:30:46.527 Total : 6238.57 48.74 0.00 0.00 20457.95 3822.93 29709.65 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2297637 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.527 { 00:30:46.527 "params": { 00:30:46.527 "name": "Nvme$subsystem", 00:30:46.527 "trtype": "$TEST_TRANSPORT", 00:30:46.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.527 "adrfam": "ipv4", 00:30:46.527 "trsvcid": "$NVMF_PORT", 00:30:46.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.527 "hdgst": ${hdgst:-false}, 00:30:46.527 "ddgst": ${ddgst:-false} 00:30:46.527 }, 00:30:46.527 "method": "bdev_nvme_attach_controller" 00:30:46.527 } 00:30:46.527 EOF 00:30:46.527 )") 00:30:46.527 [2024-06-10 11:39:13.436190] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.436223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:30:46.527 11:39:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:46.527 "params": { 00:30:46.527 "name": "Nvme1", 00:30:46.527 "trtype": "tcp", 00:30:46.527 "traddr": "10.0.0.2", 00:30:46.527 "adrfam": "ipv4", 00:30:46.527 "trsvcid": "4420", 00:30:46.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:46.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:46.527 "hdgst": false, 00:30:46.527 "ddgst": false 00:30:46.527 }, 00:30:46.527 "method": "bdev_nvme_attach_controller" 00:30:46.527 }' 00:30:46.527 [2024-06-10 11:39:13.448192] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.448203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.456208] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.456217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.464229] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.464238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.472248] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.472257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.478049] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:30:46.527 [2024-06-10 11:39:13.478106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297637 ] 00:30:46.527 [2024-06-10 11:39:13.480268] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.480278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.488289] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.488298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.496311] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.496320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.527 [2024-06-10 11:39:13.504330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.504339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.512350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.512359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.520373] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.520386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.528395] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.528405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.536415] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.536424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.536902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.527 [2024-06-10 11:39:13.544436] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.544446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.552458] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.552467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.560479] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.560489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.568502] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.568513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.576523] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.576536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.584543] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.584553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.592564] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.592574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.600586] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.600596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.601781] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.527 [2024-06-10 11:39:13.608607] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.608616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.616632] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.616645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.624652] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.624664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.632674] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.632685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:30:46.527 [2024-06-10 11:39:13.640703] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.640712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.648718] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.648727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.656736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.656745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.664754] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.664769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.672793] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.672810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.680804] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.680815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.688824] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.688835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.527 [2024-06-10 11:39:13.696849] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.527 [2024-06-10 11:39:13.696862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.704870] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.704881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.712890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.712902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.720909] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.720921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.728936] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.728950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.736958] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.736974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 Running I/O for 5 seconds... 
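The second bdevperf run reuses the same generated initiator config as the first; /dev/fd/63 is presumably a process substitution of gen_nvmf_target_json, and the attach stanza it carries (pretty-printed here, parameters verbatim from the trace) is:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}

The interleaved subsystem.c:2037 / nvmf_rpc.c:1546 ERROR pairs in this stretch are the target rejecting repeated nvmf_subsystem_add_ns calls for NSID 1, which is already attached; the loop issuing them is not visible here because xtrace is disabled (zcopy.sh@41), but each pair appears to correspond to one such call, apparently used to exercise namespace add/remove and subsystem pause/resume around the 5-second randrw job.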
00:30:46.528 [2024-06-10 11:39:13.744974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.744985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.758289] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.758309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.769420] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.769439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.777435] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.777453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.789131] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.789150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.798296] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.798314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.809412] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.809430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.817444] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.817461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.829069] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.829088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.837939] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.837961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.849736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.849754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.858963] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.858981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.868614] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.868632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.877854] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.877873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.887161] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 
[2024-06-10 11:39:13.887179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.896033] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.896050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.905872] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.905889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.915377] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.915395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.924898] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.924915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.934333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.934350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.943701] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.943720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.953062] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.953079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.962695] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.962714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.972166] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.972184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.981841] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.981859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:13.991216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:13.991233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.000446] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.000463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.009887] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.009905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.019303] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.019324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.028732] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.028749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.037982] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.038000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.047629] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.047647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.057192] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.057210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.066667] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.066690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.076359] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.076379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.085995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.086013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.095700] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.095718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.105087] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.105105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.114809] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.114838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.124346] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.124364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.133648] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.133666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.143040] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.143058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.152493] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.152512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.161920] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.161938] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.171763] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.171780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.528 [2024-06-10 11:39:14.181409] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.528 [2024-06-10 11:39:14.181426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.191204] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.191222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.200811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.200830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.210465] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.210484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.219701] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.219718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.229115] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.229132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.238561] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.238579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.248043] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.248061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.257358] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.257376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.266618] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.266636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.276195] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.276212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.285828] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.285846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.295258] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.295276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.304919] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.304937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.314179] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.314196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.323521] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.323539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.333234] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.333252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.342220] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.342238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.352187] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.352205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.363598] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.363616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.372068] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.372086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.381915] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.381933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.391156] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.391174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.400051] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.400068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.409702] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.409719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.419003] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.419020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.430160] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.430177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.440243] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.440260] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.448487] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.448504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.460146] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.460163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.468730] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.468747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.478502] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.478520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.487760] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.487778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.497139] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.497157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.506778] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.506796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.516087] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.516104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.525085] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.525102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.534943] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.534960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.546648] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.546666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.557391] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.557408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.565627] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.565644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.575466] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.575483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.584818] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.584836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.594357] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.594375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.604051] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.604068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.613592] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.613609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.622930] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.622947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.632181] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.632198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.641609] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.641627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.651085] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.651102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.660532] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.660550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.669968] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.669986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.679401] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.529 [2024-06-10 11:39:14.679419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.529 [2024-06-10 11:39:14.688782] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.688799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.698260] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.698277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.707837] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.707855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.717370] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.717387] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.726351] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.726368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.736273] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.736289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.747521] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.747538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.756098] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.756115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.767648] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.767666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.776260] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.776277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.787739] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.787757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.796285] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.796302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.807944] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.807962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.816169] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.816186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.826084] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.826101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.835393] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.835410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.844913] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.844931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.854624] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.854641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.863925] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.863942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.873247] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.873264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.882735] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.882752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.891969] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.891986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.901559] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.901576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.910820] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.910837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.920530] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.920553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.930008] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.930025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.939365] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.939382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.948767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.948785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.958175] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.958192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.967519] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.967537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.976934] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.976951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.986353] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.986370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:14.995865] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:14.995882] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.005343] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.005361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.014792] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.014810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.024327] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.024344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.033767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.033785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.043195] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.043213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.052707] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.052724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.062310] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.062328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.071454] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.071471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.080462] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.080479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.090175] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.090192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.099592] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.099613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.110927] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.110944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.119490] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.119507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.129368] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.129385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.138724] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.138741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.148050] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.148068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.157256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.157273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.166778] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.166795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.176275] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.176292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.530 [2024-06-10 11:39:15.185690] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.530 [2024-06-10 11:39:15.185708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.195334] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.195351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.204723] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.204741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.214223] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.214240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.223791] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.223808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.233214] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.233231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.242499] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.242516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.251934] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.251950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.261268] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.261285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.270650] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.270667] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.279970] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.279990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.289410] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.289428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.298952] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.298970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.308249] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.308267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.317661] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.317683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.326867] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.326884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.336147] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.336164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.345491] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.345508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.355023] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.355041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.364469] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.364486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.373975] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.373992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.383407] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.383425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.392877] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.392896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.402176] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.402194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.411540] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.411558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.421214] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.421232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.430767] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.430784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.440210] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.440228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.449806] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.449824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.459417] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.459439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.468872] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.468889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.478260] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.478278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.531 [2024-06-10 11:39:15.487637] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.531 [2024-06-10 11:39:15.487655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.497074] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.497092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.506500] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.506517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.516100] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.516118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.525202] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.525219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.534550] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.534567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.544099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.544117] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.553575] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.553593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.562881] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.562899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.572328] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.572346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.582030] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.582048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.591711] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.591729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.601287] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.601304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.612394] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.612412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.620281] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.620298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.632055] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.632073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.640573] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.640590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.650534] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.650552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.660120] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.660138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.669416] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.669434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.678304] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.678322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.688029] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.688047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.697235] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.697254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.706653] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.706676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.715953] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.715970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.725404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.725421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.734680] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.734697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.743896] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.743914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.753440] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.753457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:46.793 [2024-06-10 11:39:15.762883] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:46.793 [2024-06-10 11:39:15.762900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.772235] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.772252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.781777] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.781795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.790982] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.791000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.800373] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.800391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.809868] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.809885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.818987] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.819004] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.828330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.828347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.837759] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.837777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.847233] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.847250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.856765] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.856782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.866192] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.866209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.875933] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.875951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.885440] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.885457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.894579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.894596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.904333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.904351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.913662] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.913686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.923174] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.923192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.932552] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.932570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.942157] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.942174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.054 [2024-06-10 11:39:15.951516] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.054 [2024-06-10 11:39:15.951534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.055 [2024-06-10 11:39:15.960235] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.055 [2024-06-10 11:39:15.960253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.055 [2024-06-10 11:39:15.970180] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.055 [2024-06-10 11:39:15.970197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.055 [2024-06-10 11:39:15.981694] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.055 [2024-06-10 11:39:15.981711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.055 [2024-06-10 11:39:15.989890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.055 [2024-06-10 11:39:15.989907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.055 [2024-06-10 11:39:16.001304] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.055 [2024-06-10 11:39:16.001321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.055 [2024-06-10 11:39:16.011747] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.055 [2024-06-10 11:39:16.011765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.055 [2024-06-10 11:39:16.019890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.055 [2024-06-10 11:39:16.019907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.315 [2024-06-10 11:39:16.031429] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.315 [2024-06-10 11:39:16.031447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.315 [2024-06-10 11:39:16.040188] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.315 [2024-06-10 11:39:16.040206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.315 [2024-06-10 11:39:16.050031] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.315 [2024-06-10 11:39:16.050049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.315 [2024-06-10 11:39:16.059594] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.315 [2024-06-10 11:39:16.059611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.068934] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.068951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.078589] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.078606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.088036] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.088053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.097389] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.097407] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.106854] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.106872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.116283] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.116300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.125856] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.125873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.135130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.135147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.144478] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.144496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.153931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.153948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.163460] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.163477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.172945] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.172962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.182450] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.182467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.192235] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.192252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.201551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.201568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.210982] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.211000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.220543] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.220560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.229721] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.229738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:47.316 [2024-06-10 11:39:16.239096] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:47.316 [2024-06-10 11:39:16.239113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log condensed: the same two errors (subsystem.c:2037 rejecting NSID 1 as already in use, then nvmf_rpc.c:1546 failing the add) repeat back-to-back with only the timestamps changing, from 11:39:16.248 until the abort workload finishes at 11:39:18.758; the duplicate entries have been removed]
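For context on the errors condensed above: the target logs this pair whenever an nvmf_subsystem_add_ns RPC names a namespace ID that is already attached to the subsystem. A minimal, hypothetical reproduction (this is not the actual zcopy.sh code, and the bdev name is illustrative) looks like:

  # Hypothetical sketch, not the real test script: re-adding an NSID that is
  # already attached makes nvmf_tgt log the two errors condensed above.
  while true; do
      # NSID 1 on cnode1 is already backed by a bdev, so every call fails.
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
      sleep 0.01
  done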
00:30:50.028 Latency(us)
00:30:50.028 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:30:50.028 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:50.028 Nvme1n1                     :       5.01   13530.08     105.70      0.00     0.00    9449.77    4096.00   21736.11
00:30:50.028 ===================================================================================================================
00:30:50.028 Total                       :               13530.08     105.70      0.00     0.00    9449.77    4096.00   21736.11
[the final few occurrences of the same error pair follow as the background namespace-add loop is reaped:]
00:30:50.028 [2024-06-10 11:39:18.829145]
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.028 [2024-06-10 11:39:18.829158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.028 [2024-06-10 11:39:18.841177] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.028 [2024-06-10 11:39:18.841187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.028 [2024-06-10 11:39:18.853210] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.028 [2024-06-10 11:39:18.853222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.028 [2024-06-10 11:39:18.865243] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.028 [2024-06-10 11:39:18.865255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.028 [2024-06-10 11:39:18.877276] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.028 [2024-06-10 11:39:18.877289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.028 [2024-06-10 11:39:18.889304] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:50.028 [2024-06-10 11:39:18.889315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:50.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2297637) - No such process 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2297637 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.028 delay0 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:50.028 11:39:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:50.028 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.289 [2024-06-10 11:39:19.032087] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:56.871 Initializing NVMe Controllers 00:30:56.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.872 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:56.872 Initialization complete. Launching workers. 00:30:56.872 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 392 00:30:56.872 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 679, failed to submit 33 00:30:56.872 success 499, unsuccess 180, failed 0 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:56.872 rmmod nvme_tcp 00:30:56.872 rmmod nvme_fabrics 00:30:56.872 rmmod nvme_keyring 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2294850 ']' 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2294850 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 2294850 ']' 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 2294850 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2294850 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2294850' 00:30:56.872 killing process with pid 2294850 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 2294850 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 2294850 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.872 11:39:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.786 11:39:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
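The abort exercise recorded above swaps the namespace over to a deliberately slow delay bdev and then drives it with the abort example so there are queued commands to abort. A standalone sketch of that sequence, reconstructed from the trace above and assuming a running nvmf_tgt that already exposes nqn.2016-06.io.spdk:cnode1 with a malloc0 bdev as NSID 1 and a TCP listener on 10.0.0.2:4420, would be:

  # Sketch reconstructed from the trace above; run from an SPDK source tree.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev (latencies in microseconds) so I/O stays queued long enough to abort.
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive the slow namespace over NVMe/TCP and submit aborts against the queued commands.
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

As a sanity check on the Latency(us) summary further up, 13530.08 IOPS at the 8192-byte I/O size works out to 13530.08 * 8192 / 1048576 ≈ 105.7 MiB/s, which matches the reported MiB/s column.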
00:30:58.786 00:30:58.786 real 0m32.937s 00:30:58.786 user 0m45.061s 00:30:58.786 sys 0m9.848s 00:30:58.786 11:39:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:58.786 11:39:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:58.786 ************************************ 00:30:58.786 END TEST nvmf_zcopy 00:30:58.786 ************************************ 00:30:58.786 11:39:27 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:30:58.786 11:39:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:58.786 11:39:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:58.786 11:39:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.786 ************************************ 00:30:58.786 START TEST nvmf_nmic 00:30:58.786 ************************************ 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:30:58.786 * Looking for test storage... 00:30:58.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.786 
11:39:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' 
-z tcp ']' 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:30:58.786 11:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.930 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.931 
11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:06.931 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:06.931 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:06.931 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:06.931 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:06.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:06.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:31:06.931 00:31:06.931 --- 10.0.0.2 ping statistics --- 00:31:06.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.931 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:06.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:31:06.931 00:31:06.931 --- 10.0.0.1 ping statistics --- 00:31:06.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.931 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2304102 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2304102 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 2304102 ']' 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:06.931 11:39:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.931 [2024-06-10 11:39:34.920346] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:31:06.931 [2024-06-10 11:39:34.920407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.931 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.931 [2024-06-10 11:39:34.991063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:06.931 [2024-06-10 11:39:35.057148] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.931 [2024-06-10 11:39:35.057185] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.931 [2024-06-10 11:39:35.057193] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.931 [2024-06-10 11:39:35.057200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.931 [2024-06-10 11:39:35.057205] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.931 [2024-06-10 11:39:35.057249] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.931 [2024-06-10 11:39:35.057340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.931 [2024-06-10 11:39:35.057485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.931 [2024-06-10 11:39:35.057485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:06.931 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.932 [2024-06-10 11:39:35.831593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.932 Malloc0 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:06.932 [2024-06-10 11:39:35.890759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:06.932 test case1: single bdev can't be used in multiple subsystems 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:06.932 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:07.192 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.192 [2024-06-10 11:39:35.926690] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:07.192 [2024-06-10 11:39:35.926708] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:07.192 [2024-06-10 11:39:35.926715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.192 request: 00:31:07.192 { 00:31:07.192 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:07.192 "namespace": { 00:31:07.192 "bdev_name": "Malloc0", 00:31:07.192 "no_auto_visible": false 00:31:07.192 }, 00:31:07.192 "method": "nvmf_subsystem_add_ns", 00:31:07.192 "req_id": 1 00:31:07.193 } 00:31:07.193 Got JSON-RPC error response 00:31:07.193 response: 00:31:07.193 { 00:31:07.193 "code": -32602, 00:31:07.193 "message": "Invalid parameters" 00:31:07.193 } 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:31:07.193 Adding namespace failed - expected result. 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:07.193 test case2: host connect to nvmf target in multiple paths 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:07.193 [2024-06-10 11:39:35.938807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:07.193 11:39:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:08.576 11:39:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:10.487 11:39:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:10.487 11:39:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:31:10.487 11:39:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:31:10.487 11:39:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:31:10.487 11:39:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:31:12.431 11:39:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:31:12.432 11:39:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:12.432 11:39:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:31:12.432 11:39:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:31:12.432 11:39:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:31:12.432 11:39:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:31:12.432 11:39:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:12.432 [global] 00:31:12.432 thread=1 00:31:12.432 invalidate=1 00:31:12.432 rw=write 00:31:12.432 time_based=1 00:31:12.432 runtime=1 00:31:12.432 ioengine=libaio 00:31:12.432 direct=1 00:31:12.432 bs=4096 00:31:12.432 iodepth=1 00:31:12.432 norandommap=0 00:31:12.432 numjobs=1 00:31:12.432 00:31:12.432 verify_dump=1 00:31:12.432 verify_backlog=512 00:31:12.432 verify_state_save=0 00:31:12.432 do_verify=1 00:31:12.432 verify=crc32c-intel 00:31:12.432 [job0] 00:31:12.432 filename=/dev/nvme0n1 00:31:12.432 Could not set queue depth (nvme0n1) 00:31:12.694 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:12.694 fio-3.35 00:31:12.694 Starting 1 thread 00:31:14.079 00:31:14.079 job0: (groupid=0, jobs=1): err= 0: pid=2305521: Mon Jun 10 11:39:42 2024 00:31:14.079 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:31:14.079 slat (nsec): min=23024, max=58772, avg=24324.50, stdev=3218.50 00:31:14.079 clat (usec): min=915, max=1296, avg=1110.85, stdev=54.61 00:31:14.079 lat (usec): min=939, max=1320, avg=1135.18, stdev=54.45 00:31:14.079 clat percentiles (usec): 00:31:14.079 | 1.00th=[ 979], 5.00th=[ 1012], 10.00th=[ 1037], 20.00th=[ 1057], 00:31:14.079 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:31:14.079 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1188], 00:31:14.079 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1303], 00:31:14.079 | 99.99th=[ 1303] 00:31:14.079 write: IOPS=587, BW=2350KiB/s (2406kB/s)(2352KiB/1001msec); 0 zone resets 00:31:14.079 slat (nsec): min=9127, max=65528, avg=25668.20, stdev=9789.37 00:31:14.079 clat (usec): min=384, max=878, avg=671.87, stdev=96.74 00:31:14.079 lat (usec): min=395, max=908, avg=697.54, stdev=102.39 00:31:14.079 clat percentiles (usec): 00:31:14.079 | 1.00th=[ 424], 5.00th=[ 482], 10.00th=[ 529], 20.00th=[ 586], 00:31:14.079 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[ 676], 60.00th=[ 709], 00:31:14.079 | 70.00th=[ 742], 80.00th=[ 758], 90.00th=[ 783], 95.00th=[ 799], 00:31:14.079 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 881], 99.95th=[ 881], 00:31:14.079 | 99.99th=[ 881] 00:31:14.079 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:14.079 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:14.079 lat (usec) : 500=4.00%, 750=36.82%, 1000=14.18% 00:31:14.079 lat (msec) : 2=45.00% 00:31:14.079 cpu : usr=2.10%, sys=2.30%, ctx=1100, majf=0, minf=1 00:31:14.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.080 issued rwts: total=512,588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:14.080 00:31:14.080 Run status group 0 (all jobs): 00:31:14.080 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:31:14.080 WRITE: bw=2350KiB/s (2406kB/s), 2350KiB/s-2350KiB/s (2406kB/s-2406kB/s), io=2352KiB (2408kB), run=1001-1001msec 00:31:14.080 00:31:14.080 Disk stats (read/write): 00:31:14.080 nvme0n1: ios=521/512, merge=0/0, ticks=919/329, in_queue=1248, util=98.90% 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:14.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.080 rmmod nvme_tcp 00:31:14.080 rmmod nvme_fabrics 00:31:14.080 rmmod nvme_keyring 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2304102 ']' 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2304102 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 2304102 ']' 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 2304102 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2304102 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2304102' 00:31:14.080 killing process with pid 2304102 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 2304102 00:31:14.080 11:39:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 2304102 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:14.080 11:39:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.624 11:39:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:16.625 00:31:16.625 real 0m17.526s 00:31:16.625 user 0m52.057s 00:31:16.625 sys 0m6.124s 00:31:16.625 11:39:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:16.625 11:39:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.625 ************************************ 00:31:16.625 END TEST nvmf_nmic 00:31:16.625 ************************************ 00:31:16.625 11:39:45 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:31:16.625 11:39:45 
nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:16.625 11:39:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:16.625 11:39:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.625 ************************************ 00:31:16.625 START TEST nvmf_fio_target 00:31:16.625 ************************************ 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:31:16.625 * Looking for test storage... 00:31:16.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:16.625 11:39:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.239 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.239 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:23.239 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:23.239 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:23.239 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:23.239 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:23.239 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.240 11:39:51 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:23.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:23.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.240 11:39:51 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:23.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:23.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.240 11:39:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.240 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.240 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.240 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:23.240 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:23.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:31:23.502 00:31:23.502 --- 10.0.0.2 ping statistics --- 00:31:23.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.502 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:23.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:31:23.502 00:31:23.502 --- 10.0.0.1 ping statistics --- 00:31:23.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.502 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2309967 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2309967 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 2309967 ']' 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:23.502 11:39:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.502 [2024-06-10 11:39:52.383797] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:31:23.502 [2024-06-10 11:39:52.383865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.502 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.502 [2024-06-10 11:39:52.454704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:23.761 [2024-06-10 11:39:52.531392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.761 [2024-06-10 11:39:52.531432] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.761 [2024-06-10 11:39:52.531440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.761 [2024-06-10 11:39:52.531450] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.761 [2024-06-10 11:39:52.531456] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.761 [2024-06-10 11:39:52.531566] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.761 [2024-06-10 11:39:52.531708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.761 [2024-06-10 11:39:52.531812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.761 [2024-06-10 11:39:52.531813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:24.332 11:39:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:24.332 11:39:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:31:24.332 11:39:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:24.332 11:39:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:24.332 11:39:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:24.332 11:39:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.332 11:39:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:24.592 [2024-06-10 11:39:53.484165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.592 11:39:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:24.852 11:39:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:24.852 11:39:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.113 11:39:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:25.113 11:39:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.373 11:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:25.373 11:39:54 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.634 11:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:25.634 11:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:25.896 11:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:25.896 11:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:25.896 11:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.156 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:26.156 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:26.417 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:26.417 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:26.678 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:26.938 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:26.938 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.199 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:27.199 11:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:27.199 11:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.460 [2024-06-10 11:39:56.347713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.460 11:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:27.721 11:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:27.981 11:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:29.365 11:39:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:29.365 11:39:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:31:29.366 11:39:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:31:29.366 11:39:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:31:29.366 11:39:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:31:29.366 11:39:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:31:31.910 11:40:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:31:31.910 11:40:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:31:31.910 11:40:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:31:31.910 11:40:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:31:31.910 11:40:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:31:31.910 11:40:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:31:31.910 11:40:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:31.910 [global] 00:31:31.910 thread=1 00:31:31.910 invalidate=1 00:31:31.910 rw=write 00:31:31.910 time_based=1 00:31:31.910 runtime=1 00:31:31.910 ioengine=libaio 00:31:31.910 direct=1 00:31:31.910 bs=4096 00:31:31.910 iodepth=1 00:31:31.910 norandommap=0 00:31:31.910 numjobs=1 00:31:31.910 00:31:31.910 verify_dump=1 00:31:31.910 verify_backlog=512 00:31:31.910 verify_state_save=0 00:31:31.910 do_verify=1 00:31:31.910 verify=crc32c-intel 00:31:31.910 [job0] 00:31:31.910 filename=/dev/nvme0n1 00:31:31.910 [job1] 00:31:31.910 filename=/dev/nvme0n2 00:31:31.910 [job2] 00:31:31.910 filename=/dev/nvme0n3 00:31:31.910 [job3] 00:31:31.910 filename=/dev/nvme0n4 00:31:31.910 Could not set queue depth (nvme0n1) 00:31:31.910 Could not set queue depth (nvme0n2) 00:31:31.910 Could not set queue depth (nvme0n3) 00:31:31.910 Could not set queue depth (nvme0n4) 00:31:31.910 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.910 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.910 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.910 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:31.910 fio-3.35 00:31:31.910 Starting 4 threads 00:31:33.319 00:31:33.319 job0: (groupid=0, jobs=1): err= 0: pid=2311797: Mon Jun 10 11:40:01 2024 00:31:33.319 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:31:33.319 slat (nsec): min=7410, max=42117, avg=24746.39, stdev=2450.04 00:31:33.319 clat (usec): min=705, max=1186, avg=1019.09, stdev=78.37 00:31:33.319 lat (usec): min=730, max=1227, avg=1043.83, stdev=78.53 00:31:33.319 clat percentiles (usec): 00:31:33.319 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 963], 00:31:33.319 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1037], 60.00th=[ 1057], 00:31:33.319 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1123], 00:31:33.319 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188], 00:31:33.319 | 99.99th=[ 1188] 00:31:33.319 write: IOPS=669, BW=2677KiB/s (2742kB/s)(2680KiB/1001msec); 0 zone resets 00:31:33.319 slat (nsec): min=9597, max=51907, avg=29293.57, stdev=8858.08 00:31:33.319 clat (usec): min=306, max=1067, 
avg=648.38, stdev=137.58 00:31:33.319 lat (usec): min=324, max=1099, avg=677.68, stdev=140.13 00:31:33.319 clat percentiles (usec): 00:31:33.319 | 1.00th=[ 330], 5.00th=[ 408], 10.00th=[ 449], 20.00th=[ 519], 00:31:33.319 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 660], 60.00th=[ 693], 00:31:33.319 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 824], 95.00th=[ 857], 00:31:33.319 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1074], 99.95th=[ 1074], 00:31:33.319 | 99.99th=[ 1074] 00:31:33.319 bw ( KiB/s): min= 4096, max= 4096, per=48.04%, avg=4096.00, stdev= 0.00, samples=1 00:31:33.319 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:33.319 lat (usec) : 500=9.48%, 750=33.76%, 1000=29.10% 00:31:33.319 lat (msec) : 2=27.66% 00:31:33.319 cpu : usr=1.80%, sys=3.30%, ctx=1183, majf=0, minf=1 00:31:33.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.319 issued rwts: total=512,670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.319 job1: (groupid=0, jobs=1): err= 0: pid=2311803: Mon Jun 10 11:40:01 2024 00:31:33.319 read: IOPS=16, BW=65.7KiB/s (67.3kB/s)(68.0KiB/1035msec) 00:31:33.319 slat (nsec): min=25062, max=25733, avg=25294.00, stdev=167.89 00:31:33.319 clat (usec): min=997, max=42961, avg=39604.10, stdev=9951.95 00:31:33.319 lat (usec): min=1022, max=42986, avg=39629.39, stdev=9951.92 00:31:33.319 clat percentiles (usec): 00:31:33.319 | 1.00th=[ 996], 5.00th=[ 996], 10.00th=[41681], 20.00th=[41681], 00:31:33.319 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:33.319 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:33.319 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:33.319 | 99.99th=[42730] 00:31:33.319 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:31:33.319 slat (usec): min=9, max=2753, avg=34.92, stdev=120.85 00:31:33.319 clat (usec): min=128, max=1402, avg=658.47, stdev=156.30 00:31:33.319 lat (usec): min=139, max=3307, avg=693.39, stdev=197.36 00:31:33.319 clat percentiles (usec): 00:31:33.319 | 1.00th=[ 293], 5.00th=[ 367], 10.00th=[ 449], 20.00th=[ 537], 00:31:33.319 | 30.00th=[ 586], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 709], 00:31:33.319 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 898], 00:31:33.319 | 99.00th=[ 963], 99.50th=[ 1004], 99.90th=[ 1401], 99.95th=[ 1401], 00:31:33.319 | 99.99th=[ 1401] 00:31:33.319 bw ( KiB/s): min= 4096, max= 4096, per=48.04%, avg=4096.00, stdev= 0.00, samples=1 00:31:33.319 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:33.319 lat (usec) : 250=0.57%, 500=15.88%, 750=52.74%, 1000=27.22% 00:31:33.319 lat (msec) : 2=0.57%, 50=3.02% 00:31:33.319 cpu : usr=0.97%, sys=1.16%, ctx=534, majf=0, minf=1 00:31:33.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.319 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.319 job2: (groupid=0, jobs=1): err= 0: pid=2311816: Mon Jun 10 11:40:01 2024 00:31:33.319 read: IOPS=15, 
BW=63.0KiB/s (64.5kB/s)(64.0KiB/1016msec) 00:31:33.319 slat (nsec): min=26407, max=27705, avg=26729.75, stdev=349.29 00:31:33.319 clat (usec): min=951, max=42073, avg=39395.18, stdev=10251.99 00:31:33.319 lat (usec): min=978, max=42100, avg=39421.91, stdev=10251.88 00:31:33.319 clat percentiles (usec): 00:31:33.319 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[41681], 20.00th=[41681], 00:31:33.319 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:31:33.319 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:33.319 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:33.319 | 99.99th=[42206] 00:31:33.319 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:31:33.319 slat (usec): min=9, max=1810, avg=35.93, stdev=79.24 00:31:33.319 clat (usec): min=334, max=1083, avg=706.59, stdev=123.58 00:31:33.319 lat (usec): min=350, max=2688, avg=742.52, stdev=153.07 00:31:33.319 clat percentiles (usec): 00:31:33.319 | 1.00th=[ 416], 5.00th=[ 494], 10.00th=[ 545], 20.00th=[ 603], 00:31:33.319 | 30.00th=[ 644], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 734], 00:31:33.320 | 70.00th=[ 766], 80.00th=[ 816], 90.00th=[ 865], 95.00th=[ 914], 00:31:33.320 | 99.00th=[ 971], 99.50th=[ 1012], 99.90th=[ 1090], 99.95th=[ 1090], 00:31:33.320 | 99.99th=[ 1090] 00:31:33.320 bw ( KiB/s): min= 4096, max= 4096, per=48.04%, avg=4096.00, stdev= 0.00, samples=1 00:31:33.320 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:33.320 lat (usec) : 500=5.30%, 750=57.39%, 1000=33.71% 00:31:33.320 lat (msec) : 2=0.76%, 50=2.84% 00:31:33.320 cpu : usr=1.48%, sys=1.58%, ctx=532, majf=0, minf=1 00:31:33.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.320 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.320 job3: (groupid=0, jobs=1): err= 0: pid=2311823: Mon Jun 10 11:40:01 2024 00:31:33.320 read: IOPS=211, BW=847KiB/s (867kB/s)(848KiB/1001msec) 00:31:33.320 slat (nsec): min=7854, max=37714, avg=25009.31, stdev=1718.87 00:31:33.320 clat (usec): min=619, max=42995, avg=3182.01, stdev=9180.39 00:31:33.320 lat (usec): min=644, max=43019, avg=3207.02, stdev=9180.38 00:31:33.320 clat percentiles (usec): 00:31:33.320 | 1.00th=[ 676], 5.00th=[ 906], 10.00th=[ 947], 20.00th=[ 988], 00:31:33.320 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:31:33.320 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[41681], 00:31:33.320 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:33.320 | 99.99th=[43254] 00:31:33.320 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:31:33.320 slat (nsec): min=9823, max=55871, avg=29814.94, stdev=8950.53 00:31:33.320 clat (usec): min=216, max=1033, avg=580.35, stdev=130.40 00:31:33.320 lat (usec): min=226, max=1067, avg=610.16, stdev=132.78 00:31:33.320 clat percentiles (usec): 00:31:33.320 | 1.00th=[ 285], 5.00th=[ 351], 10.00th=[ 416], 20.00th=[ 457], 00:31:33.320 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:31:33.320 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 783], 00:31:33.320 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 1037], 99.95th=[ 1037], 00:31:33.320 | 99.99th=[ 1037] 
00:31:33.320 bw ( KiB/s): min= 4096, max= 4096, per=48.04%, avg=4096.00, stdev= 0.00, samples=1 00:31:33.320 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:33.320 lat (usec) : 250=0.28%, 500=18.23%, 750=46.41%, 1000=13.81% 00:31:33.320 lat (msec) : 2=19.75%, 50=1.52% 00:31:33.320 cpu : usr=0.90%, sys=2.20%, ctx=725, majf=0, minf=1 00:31:33.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.320 issued rwts: total=212,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:33.320 00:31:33.320 Run status group 0 (all jobs): 00:31:33.320 READ: bw=2926KiB/s (2996kB/s), 63.0KiB/s-2046KiB/s (64.5kB/s-2095kB/s), io=3028KiB (3101kB), run=1001-1035msec 00:31:33.320 WRITE: bw=8526KiB/s (8730kB/s), 1979KiB/s-2677KiB/s (2026kB/s-2742kB/s), io=8824KiB (9036kB), run=1001-1035msec 00:31:33.320 00:31:33.320 Disk stats (read/write): 00:31:33.320 nvme0n1: ios=489/512, merge=0/0, ticks=504/339, in_queue=843, util=86.67% 00:31:33.320 nvme0n2: ios=52/512, merge=0/0, ticks=599/319, in_queue=918, util=90.81% 00:31:33.320 nvme0n3: ios=72/512, merge=0/0, ticks=599/289, in_queue=888, util=92.81% 00:31:33.320 nvme0n4: ios=121/512, merge=0/0, ticks=723/282, in_queue=1005, util=94.55% 00:31:33.320 11:40:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:33.320 [global] 00:31:33.320 thread=1 00:31:33.320 invalidate=1 00:31:33.320 rw=randwrite 00:31:33.320 time_based=1 00:31:33.320 runtime=1 00:31:33.320 ioengine=libaio 00:31:33.320 direct=1 00:31:33.320 bs=4096 00:31:33.320 iodepth=1 00:31:33.320 norandommap=0 00:31:33.320 numjobs=1 00:31:33.320 00:31:33.320 verify_dump=1 00:31:33.320 verify_backlog=512 00:31:33.320 verify_state_save=0 00:31:33.320 do_verify=1 00:31:33.320 verify=crc32c-intel 00:31:33.320 [job0] 00:31:33.320 filename=/dev/nvme0n1 00:31:33.320 [job1] 00:31:33.320 filename=/dev/nvme0n2 00:31:33.320 [job2] 00:31:33.320 filename=/dev/nvme0n3 00:31:33.320 [job3] 00:31:33.320 filename=/dev/nvme0n4 00:31:33.320 Could not set queue depth (nvme0n1) 00:31:33.320 Could not set queue depth (nvme0n2) 00:31:33.320 Could not set queue depth (nvme0n3) 00:31:33.320 Could not set queue depth (nvme0n4) 00:31:33.587 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.587 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.587 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.587 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:33.587 fio-3.35 00:31:33.587 Starting 4 threads 00:31:34.998 00:31:34.998 job0: (groupid=0, jobs=1): err= 0: pid=2312322: Mon Jun 10 11:40:03 2024 00:31:34.998 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:31:34.998 slat (nsec): min=6573, max=53966, avg=26238.00, stdev=3273.48 00:31:34.998 clat (usec): min=771, max=1319, avg=1123.10, stdev=70.96 00:31:34.998 lat (usec): min=798, max=1345, avg=1149.34, stdev=70.88 00:31:34.998 clat percentiles (usec): 00:31:34.998 | 1.00th=[ 906], 5.00th=[ 988], 10.00th=[ 1037], 20.00th=[ 
1074], 00:31:34.998 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1139], 00:31:34.998 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1221], 00:31:34.998 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1319], 99.95th=[ 1319], 00:31:34.998 | 99.99th=[ 1319] 00:31:34.998 write: IOPS=588, BW=2354KiB/s (2410kB/s)(2356KiB/1001msec); 0 zone resets 00:31:34.998 slat (nsec): min=8727, max=57822, avg=28919.29, stdev=9667.12 00:31:34.998 clat (usec): min=225, max=1831, avg=654.95, stdev=151.86 00:31:34.998 lat (usec): min=235, max=1840, avg=683.86, stdev=154.42 00:31:34.998 clat percentiles (usec): 00:31:34.998 | 1.00th=[ 334], 5.00th=[ 424], 10.00th=[ 474], 20.00th=[ 537], 00:31:34.998 | 30.00th=[ 570], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 693], 00:31:34.998 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 816], 95.00th=[ 857], 00:31:34.998 | 99.00th=[ 938], 99.50th=[ 1565], 99.90th=[ 1827], 99.95th=[ 1827], 00:31:34.998 | 99.99th=[ 1827] 00:31:34.998 bw ( KiB/s): min= 4096, max= 4096, per=41.43%, avg=4096.00, stdev= 0.00, samples=1 00:31:34.998 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:34.998 lat (usec) : 250=0.09%, 500=7.27%, 750=31.97%, 1000=16.44% 00:31:34.998 lat (msec) : 2=44.23% 00:31:34.998 cpu : usr=2.50%, sys=3.90%, ctx=1103, majf=0, minf=1 00:31:34.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.998 issued rwts: total=512,589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.998 job1: (groupid=0, jobs=1): err= 0: pid=2312328: Mon Jun 10 11:40:03 2024 00:31:34.998 read: IOPS=16, BW=66.0KiB/s (67.6kB/s)(68.0KiB/1030msec) 00:31:34.998 slat (nsec): min=9557, max=30732, avg=25215.29, stdev=4205.82 00:31:34.998 clat (usec): min=1063, max=42981, avg=39683.68, stdev=9957.98 00:31:34.998 lat (usec): min=1072, max=43007, avg=39708.90, stdev=9962.01 00:31:34.998 clat percentiles (usec): 00:31:34.998 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41681], 20.00th=[41681], 00:31:34.998 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:34.998 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:31:34.998 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:34.998 | 99.99th=[42730] 00:31:34.998 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:31:34.998 slat (nsec): min=8989, max=64561, avg=28021.65, stdev=10905.46 00:31:34.998 clat (usec): min=301, max=2130, avg=657.53, stdev=154.44 00:31:34.998 lat (usec): min=322, max=2170, avg=685.55, stdev=159.06 00:31:34.998 clat percentiles (usec): 00:31:34.998 | 1.00th=[ 355], 5.00th=[ 420], 10.00th=[ 441], 20.00th=[ 529], 00:31:34.998 | 30.00th=[ 586], 40.00th=[ 644], 50.00th=[ 668], 60.00th=[ 701], 00:31:34.998 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 824], 95.00th=[ 865], 00:31:34.998 | 99.00th=[ 938], 99.50th=[ 996], 99.90th=[ 2147], 99.95th=[ 2147], 00:31:34.998 | 99.99th=[ 2147] 00:31:34.998 bw ( KiB/s): min= 4096, max= 4096, per=41.43%, avg=4096.00, stdev= 0.00, samples=1 00:31:34.998 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:34.998 lat (usec) : 500=15.88%, 750=53.69%, 1000=26.84% 00:31:34.998 lat (msec) : 2=0.38%, 4=0.19%, 50=3.02% 00:31:34.998 cpu : usr=0.97%, sys=1.85%, ctx=530, majf=0, minf=1 
00:31:34.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.999 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.999 job2: (groupid=0, jobs=1): err= 0: pid=2312335: Mon Jun 10 11:40:03 2024 00:31:34.999 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:31:34.999 slat (nsec): min=6340, max=60672, avg=26190.68, stdev=3669.72 00:31:34.999 clat (usec): min=433, max=1172, avg=928.02, stdev=94.19 00:31:34.999 lat (usec): min=477, max=1198, avg=954.21, stdev=94.61 00:31:34.999 clat percentiles (usec): 00:31:34.999 | 1.00th=[ 635], 5.00th=[ 709], 10.00th=[ 807], 20.00th=[ 889], 00:31:34.999 | 30.00th=[ 914], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 963], 00:31:34.999 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1012], 95.00th=[ 1037], 00:31:34.999 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1172], 00:31:34.999 | 99.99th=[ 1172] 00:31:34.999 write: IOPS=932, BW=3728KiB/s (3818kB/s)(3732KiB/1001msec); 0 zone resets 00:31:34.999 slat (nsec): min=8607, max=66405, avg=27880.05, stdev=9756.01 00:31:34.999 clat (usec): min=153, max=1993, avg=508.74, stdev=171.19 00:31:34.999 lat (usec): min=162, max=2008, avg=536.62, stdev=173.84 00:31:34.999 clat percentiles (usec): 00:31:34.999 | 1.00th=[ 169], 5.00th=[ 277], 10.00th=[ 314], 20.00th=[ 388], 00:31:34.999 | 30.00th=[ 420], 40.00th=[ 465], 50.00th=[ 515], 60.00th=[ 545], 00:31:34.999 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 701], 00:31:34.999 | 99.00th=[ 840], 99.50th=[ 1549], 99.90th=[ 1991], 99.95th=[ 1991], 00:31:34.999 | 99.99th=[ 1991] 00:31:34.999 bw ( KiB/s): min= 4096, max= 4096, per=41.43%, avg=4096.00, stdev= 0.00, samples=1 00:31:34.999 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:34.999 lat (usec) : 250=1.94%, 500=27.75%, 750=36.19%, 1000=27.96% 00:31:34.999 lat (msec) : 2=6.16% 00:31:34.999 cpu : usr=2.90%, sys=5.30%, ctx=1445, majf=0, minf=1 00:31:34.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.999 issued rwts: total=512,933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.999 job3: (groupid=0, jobs=1): err= 0: pid=2312342: Mon Jun 10 11:40:03 2024 00:31:34.999 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:31:34.999 slat (nsec): min=24220, max=26137, avg=24694.59, stdev=420.51 00:31:34.999 clat (usec): min=1262, max=42655, avg=39611.72, stdev=9883.89 00:31:34.999 lat (usec): min=1287, max=42680, avg=39636.41, stdev=9883.85 00:31:34.999 clat percentiles (usec): 00:31:34.999 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[41681], 20.00th=[41681], 00:31:34.999 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:34.999 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:34.999 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:34.999 | 99.99th=[42730] 00:31:34.999 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:31:34.999 slat (nsec): min=8959, max=63943, avg=25257.56, stdev=9967.49 00:31:34.999 
clat (usec): min=209, max=1806, avg=622.08, stdev=204.52 00:31:34.999 lat (usec): min=237, max=1853, avg=647.34, stdev=209.91 00:31:34.999 clat percentiles (usec): 00:31:34.999 | 1.00th=[ 231], 5.00th=[ 273], 10.00th=[ 334], 20.00th=[ 465], 00:31:34.999 | 30.00th=[ 515], 40.00th=[ 570], 50.00th=[ 644], 60.00th=[ 701], 00:31:34.999 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 832], 95.00th=[ 889], 00:31:34.999 | 99.00th=[ 1012], 99.50th=[ 1598], 99.90th=[ 1811], 99.95th=[ 1811], 00:31:34.999 | 99.99th=[ 1811] 00:31:34.999 bw ( KiB/s): min= 4104, max= 4104, per=41.51%, avg=4104.00, stdev= 0.00, samples=1 00:31:34.999 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:31:34.999 lat (usec) : 250=2.46%, 500=23.44%, 750=44.80%, 1000=24.95% 00:31:34.999 lat (msec) : 2=1.32%, 50=3.02% 00:31:34.999 cpu : usr=0.89%, sys=1.09%, ctx=529, majf=0, minf=1 00:31:34.999 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.999 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.999 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:34.999 00:31:34.999 Run status group 0 (all jobs): 00:31:34.999 READ: bw=4109KiB/s (4207kB/s), 66.0KiB/s-2046KiB/s (67.6kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1030msec 00:31:34.999 WRITE: bw=9887KiB/s (10.1MB/s), 1988KiB/s-3728KiB/s (2036kB/s-3818kB/s), io=9.95MiB (10.4MB), run=1001-1030msec 00:31:34.999 00:31:34.999 Disk stats (read/write): 00:31:34.999 nvme0n1: ios=455/512, merge=0/0, ticks=1496/282, in_queue=1778, util=98.50% 00:31:34.999 nvme0n2: ios=34/512, merge=0/0, ticks=1351/280, in_queue=1631, util=90.02% 00:31:34.999 nvme0n3: ios=568/645, merge=0/0, ticks=514/253, in_queue=767, util=92.11% 00:31:34.999 nvme0n4: ios=69/512, merge=0/0, ticks=578/289, in_queue=867, util=95.42% 00:31:34.999 11:40:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:34.999 [global] 00:31:34.999 thread=1 00:31:34.999 invalidate=1 00:31:34.999 rw=write 00:31:34.999 time_based=1 00:31:34.999 runtime=1 00:31:34.999 ioengine=libaio 00:31:34.999 direct=1 00:31:34.999 bs=4096 00:31:34.999 iodepth=128 00:31:34.999 norandommap=0 00:31:34.999 numjobs=1 00:31:34.999 00:31:34.999 verify_dump=1 00:31:34.999 verify_backlog=512 00:31:34.999 verify_state_save=0 00:31:34.999 do_verify=1 00:31:34.999 verify=crc32c-intel 00:31:34.999 [job0] 00:31:34.999 filename=/dev/nvme0n1 00:31:34.999 [job1] 00:31:34.999 filename=/dev/nvme0n2 00:31:34.999 [job2] 00:31:34.999 filename=/dev/nvme0n3 00:31:34.999 [job3] 00:31:34.999 filename=/dev/nvme0n4 00:31:34.999 Could not set queue depth (nvme0n1) 00:31:34.999 Could not set queue depth (nvme0n2) 00:31:34.999 Could not set queue depth (nvme0n3) 00:31:34.999 Could not set queue depth (nvme0n4) 00:31:35.266 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.266 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.266 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.266 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:35.266 fio-3.35 00:31:35.266 Starting 4 
threads 00:31:36.691 00:31:36.691 job0: (groupid=0, jobs=1): err= 0: pid=2312841: Mon Jun 10 11:40:05 2024 00:31:36.691 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:31:36.691 slat (nsec): min=1307, max=5737.7k, avg=78197.54, stdev=482235.91 00:31:36.691 clat (usec): min=5806, max=21788, avg=10233.50, stdev=1968.89 00:31:36.691 lat (usec): min=6365, max=25295, avg=10311.69, stdev=2002.38 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 6652], 5.00th=[ 7767], 10.00th=[ 8356], 20.00th=[ 9110], 00:31:36.691 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:31:36.691 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11863], 95.00th=[13173], 00:31:36.691 | 99.00th=[20055], 99.50th=[20055], 99.90th=[21890], 99.95th=[21890], 00:31:36.691 | 99.99th=[21890] 00:31:36.691 write: IOPS=6516, BW=25.5MiB/s (26.7MB/s)(25.6MiB/1004msec); 0 zone resets 00:31:36.691 slat (usec): min=2, max=13755, avg=74.82, stdev=416.89 00:31:36.691 clat (usec): min=3879, max=17797, avg=9808.48, stdev=1482.10 00:31:36.691 lat (usec): min=4657, max=20214, avg=9883.30, stdev=1515.11 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 5866], 5.00th=[ 7308], 10.00th=[ 8586], 20.00th=[ 9110], 00:31:36.691 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:31:36.691 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11338], 95.00th=[12387], 00:31:36.691 | 99.00th=[14222], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:31:36.691 | 99.99th=[17695] 00:31:36.691 bw ( KiB/s): min=24576, max=26752, per=29.49%, avg=25664.00, stdev=1538.66, samples=2 00:31:36.691 iops : min= 6144, max= 6688, avg=6416.00, stdev=384.67, samples=2 00:31:36.691 lat (msec) : 4=0.01%, 10=60.90%, 20=38.33%, 50=0.76% 00:31:36.691 cpu : usr=5.08%, sys=5.08%, ctx=712, majf=0, minf=1 00:31:36.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:36.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.691 issued rwts: total=6144,6543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.691 job1: (groupid=0, jobs=1): err= 0: pid=2312845: Mon Jun 10 11:40:05 2024 00:31:36.691 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:31:36.691 slat (nsec): min=1288, max=23037k, avg=100382.16, stdev=738593.53 00:31:36.691 clat (usec): min=4795, max=59309, avg=12673.88, stdev=8734.28 00:31:36.691 lat (usec): min=4797, max=59335, avg=12774.26, stdev=8804.75 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7308], 00:31:36.691 | 30.00th=[ 7635], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8848], 00:31:36.691 | 70.00th=[16909], 80.00th=[17433], 90.00th=[19530], 95.00th=[31327], 00:31:36.691 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[56361], 00:31:36.691 | 99.99th=[59507] 00:31:36.691 write: IOPS=4224, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1006msec); 0 zone resets 00:31:36.691 slat (usec): min=2, max=24653, avg=131.40, stdev=960.27 00:31:36.691 clat (usec): min=580, max=67445, avg=17778.08, stdev=13256.32 00:31:36.691 lat (usec): min=588, max=67477, avg=17909.47, stdev=13352.52 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 3261], 5.00th=[ 4686], 10.00th=[ 6325], 20.00th=[ 7242], 00:31:36.691 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[13566], 60.00th=[18220], 00:31:36.691 | 70.00th=[21627], 
80.00th=[28705], 90.00th=[39060], 95.00th=[46924], 00:31:36.691 | 99.00th=[56361], 99.50th=[56886], 99.90th=[56886], 99.95th=[57934], 00:31:36.691 | 99.99th=[67634] 00:31:36.691 bw ( KiB/s): min= 8408, max=24576, per=18.95%, avg=16492.00, stdev=11432.50, samples=2 00:31:36.691 iops : min= 2102, max= 6144, avg=4123.00, stdev=2858.13, samples=2 00:31:36.691 lat (usec) : 750=0.06% 00:31:36.691 lat (msec) : 2=0.08%, 4=1.32%, 10=52.17%, 20=24.38%, 50=19.30% 00:31:36.691 lat (msec) : 100=2.68% 00:31:36.691 cpu : usr=2.69%, sys=4.48%, ctx=429, majf=0, minf=2 00:31:36.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:36.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.691 issued rwts: total=4096,4250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.691 job2: (groupid=0, jobs=1): err= 0: pid=2312852: Mon Jun 10 11:40:05 2024 00:31:36.691 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:31:36.691 slat (nsec): min=1421, max=12603k, avg=120717.48, stdev=867574.64 00:31:36.691 clat (usec): min=5406, max=32380, avg=14614.30, stdev=4309.28 00:31:36.691 lat (usec): min=5412, max=32383, avg=14735.02, stdev=4367.35 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 6587], 5.00th=[10421], 10.00th=[10945], 20.00th=[11469], 00:31:36.691 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13435], 60.00th=[14091], 00:31:36.691 | 70.00th=[15795], 80.00th=[17433], 90.00th=[20317], 95.00th=[23987], 00:31:36.691 | 99.00th=[28967], 99.50th=[30540], 99.90th=[32375], 99.95th=[32375], 00:31:36.691 | 99.99th=[32375] 00:31:36.691 write: IOPS=4381, BW=17.1MiB/s (17.9MB/s)(17.3MiB/1011msec); 0 zone resets 00:31:36.691 slat (usec): min=2, max=16585, avg=108.99, stdev=591.46 00:31:36.691 clat (usec): min=1182, max=32375, avg=15399.25, stdev=5821.38 00:31:36.691 lat (usec): min=1194, max=32379, avg=15508.23, stdev=5865.40 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 4228], 5.00th=[ 6849], 10.00th=[ 8586], 20.00th=[10683], 00:31:36.691 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13304], 60.00th=[16909], 00:31:36.691 | 70.00th=[19792], 80.00th=[21890], 90.00th=[23200], 95.00th=[23725], 00:31:36.691 | 99.00th=[27919], 99.50th=[27919], 99.90th=[30278], 99.95th=[31065], 00:31:36.691 | 99.99th=[32375] 00:31:36.691 bw ( KiB/s): min=15240, max=19184, per=19.78%, avg=17212.00, stdev=2788.83, samples=2 00:31:36.691 iops : min= 3810, max= 4796, avg=4303.00, stdev=697.21, samples=2 00:31:36.691 lat (msec) : 2=0.02%, 4=0.36%, 10=9.15%, 20=70.87%, 50=19.60% 00:31:36.691 cpu : usr=2.87%, sys=4.75%, ctx=508, majf=0, minf=1 00:31:36.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:36.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.691 issued rwts: total=4096,4430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.691 job3: (groupid=0, jobs=1): err= 0: pid=2312853: Mon Jun 10 11:40:05 2024 00:31:36.691 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:31:36.691 slat (nsec): min=1353, max=13702k, avg=77757.58, stdev=556441.78 00:31:36.691 clat (usec): min=2246, max=31527, avg=10343.40, stdev=3903.92 00:31:36.691 lat (usec): min=2261, max=31533, avg=10421.15, 
stdev=3935.65 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 4686], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8291], 00:31:36.691 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 9110], 00:31:36.691 | 70.00th=[10028], 80.00th=[12649], 90.00th=[13829], 95.00th=[17433], 00:31:36.691 | 99.00th=[27132], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:31:36.691 | 99.99th=[31589] 00:31:36.691 write: IOPS=6756, BW=26.4MiB/s (27.7MB/s)(26.4MiB/1002msec); 0 zone resets 00:31:36.691 slat (usec): min=2, max=9658, avg=59.54, stdev=359.84 00:31:36.691 clat (usec): min=526, max=23996, avg=8595.77, stdev=2234.88 00:31:36.691 lat (usec): min=1496, max=23998, avg=8655.31, stdev=2267.97 00:31:36.691 clat percentiles (usec): 00:31:36.691 | 1.00th=[ 2999], 5.00th=[ 4686], 10.00th=[ 6325], 20.00th=[ 7373], 00:31:36.691 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8160], 60.00th=[ 8356], 00:31:36.692 | 70.00th=[ 8586], 80.00th=[10290], 90.00th=[12518], 95.00th=[13042], 00:31:36.692 | 99.00th=[13435], 99.50th=[13698], 99.90th=[16319], 99.95th=[20055], 00:31:36.692 | 99.99th=[23987] 00:31:36.692 bw ( KiB/s): min=24576, max=24576, per=28.24%, avg=24576.00, stdev= 0.00, samples=1 00:31:36.692 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:31:36.692 lat (usec) : 750=0.01% 00:31:36.692 lat (msec) : 2=0.09%, 4=1.48%, 10=73.24%, 20=23.14%, 50=2.04% 00:31:36.692 cpu : usr=4.50%, sys=6.19%, ctx=602, majf=0, minf=1 00:31:36.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:31:36.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:36.692 issued rwts: total=6656,6770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:36.692 00:31:36.692 Run status group 0 (all jobs): 00:31:36.692 READ: bw=81.1MiB/s (85.0MB/s), 15.8MiB/s-25.9MiB/s (16.6MB/s-27.2MB/s), io=82.0MiB (86.0MB), run=1002-1011msec 00:31:36.692 WRITE: bw=85.0MiB/s (89.1MB/s), 16.5MiB/s-26.4MiB/s (17.3MB/s-27.7MB/s), io=85.9MiB (90.1MB), run=1002-1011msec 00:31:36.692 00:31:36.692 Disk stats (read/write): 00:31:36.692 nvme0n1: ios=5165/5328, merge=0/0, ticks=25871/24033, in_queue=49904, util=86.87% 00:31:36.692 nvme0n2: ios=3633/3855, merge=0/0, ticks=16003/31762, in_queue=47765, util=88.28% 00:31:36.692 nvme0n3: ios=3636/3631, merge=0/0, ticks=51135/51690, in_queue=102825, util=94.94% 00:31:36.692 nvme0n4: ios=5386/5632, merge=0/0, ticks=39057/30726, in_queue=69783, util=94.14% 00:31:36.692 11:40:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:36.692 [global] 00:31:36.692 thread=1 00:31:36.692 invalidate=1 00:31:36.692 rw=randwrite 00:31:36.692 time_based=1 00:31:36.692 runtime=1 00:31:36.692 ioengine=libaio 00:31:36.692 direct=1 00:31:36.692 bs=4096 00:31:36.692 iodepth=128 00:31:36.692 norandommap=0 00:31:36.692 numjobs=1 00:31:36.692 00:31:36.692 verify_dump=1 00:31:36.692 verify_backlog=512 00:31:36.692 verify_state_save=0 00:31:36.692 do_verify=1 00:31:36.692 verify=crc32c-intel 00:31:36.692 [job0] 00:31:36.692 filename=/dev/nvme0n1 00:31:36.692 [job1] 00:31:36.692 filename=/dev/nvme0n2 00:31:36.692 [job2] 00:31:36.692 filename=/dev/nvme0n3 00:31:36.692 [job3] 00:31:36.692 filename=/dev/nvme0n4 00:31:36.692 Could not set queue depth (nvme0n1) 00:31:36.692 Could not set queue 
depth (nvme0n2) 00:31:36.692 Could not set queue depth (nvme0n3) 00:31:36.692 Could not set queue depth (nvme0n4) 00:31:36.955 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.955 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.955 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.955 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:36.955 fio-3.35 00:31:36.955 Starting 4 threads 00:31:38.370 00:31:38.370 job0: (groupid=0, jobs=1): err= 0: pid=2313349: Mon Jun 10 11:40:06 2024 00:31:38.370 read: IOPS=4068, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:31:38.370 slat (nsec): min=1287, max=19924k, avg=109565.51, stdev=829868.00 00:31:38.370 clat (usec): min=2124, max=48410, avg=13609.92, stdev=5013.65 00:31:38.370 lat (usec): min=4858, max=48417, avg=13719.48, stdev=5099.69 00:31:38.370 clat percentiles (usec): 00:31:38.370 | 1.00th=[ 5014], 5.00th=[ 7242], 10.00th=[ 8291], 20.00th=[10028], 00:31:38.370 | 30.00th=[11076], 40.00th=[11863], 50.00th=[13173], 60.00th=[13698], 00:31:38.370 | 70.00th=[13960], 80.00th=[16188], 90.00th=[20317], 95.00th=[23200], 00:31:38.370 | 99.00th=[31851], 99.50th=[35390], 99.90th=[43779], 99.95th=[48497], 00:31:38.370 | 99.99th=[48497] 00:31:38.370 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:31:38.370 slat (usec): min=2, max=14270, avg=127.63, stdev=691.74 00:31:38.370 clat (usec): min=2003, max=57219, avg=17453.33, stdev=14508.84 00:31:38.370 lat (usec): min=2409, max=57226, avg=17580.96, stdev=14607.84 00:31:38.370 clat percentiles (usec): 00:31:38.370 | 1.00th=[ 3982], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7635], 00:31:38.370 | 30.00th=[ 8225], 40.00th=[ 9372], 50.00th=[11076], 60.00th=[11600], 00:31:38.370 | 70.00th=[18744], 80.00th=[25560], 90.00th=[45876], 95.00th=[51643], 00:31:38.370 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:31:38.370 | 99.99th=[57410] 00:31:38.370 bw ( KiB/s): min=16384, max=16384, per=19.96%, avg=16384.00, stdev= 0.00, samples=2 00:31:38.370 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:31:38.371 lat (msec) : 4=0.55%, 10=30.52%, 20=49.66%, 50=16.22%, 100=3.04% 00:31:38.371 cpu : usr=3.78%, sys=3.59%, ctx=331, majf=0, minf=1 00:31:38.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.371 issued rwts: total=4089,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.371 job1: (groupid=0, jobs=1): err= 0: pid=2313350: Mon Jun 10 11:40:06 2024 00:31:38.371 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:31:38.371 slat (nsec): min=1322, max=10330k, avg=76736.20, stdev=568525.01 00:31:38.371 clat (usec): min=2946, max=27182, avg=10419.80, stdev=4189.42 00:31:38.371 lat (usec): min=2951, max=27738, avg=10496.54, stdev=4228.54 00:31:38.371 clat percentiles (usec): 00:31:38.371 | 1.00th=[ 5145], 5.00th=[ 5735], 10.00th=[ 6063], 20.00th=[ 6849], 00:31:38.371 | 30.00th=[ 7635], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[10159], 00:31:38.371 | 70.00th=[11731], 80.00th=[13829], 90.00th=[16319], 
95.00th=[19268], 00:31:38.371 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22938], 99.95th=[22938], 00:31:38.371 | 99.99th=[27132] 00:31:38.371 write: IOPS=6587, BW=25.7MiB/s (27.0MB/s)(25.8MiB/1004msec); 0 zone resets 00:31:38.371 slat (usec): min=2, max=9758, avg=74.86, stdev=472.68 00:31:38.371 clat (usec): min=1743, max=36573, avg=9581.13, stdev=6098.17 00:31:38.371 lat (usec): min=2228, max=36583, avg=9655.98, stdev=6133.20 00:31:38.371 clat percentiles (usec): 00:31:38.371 | 1.00th=[ 3654], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5669], 00:31:38.371 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 8225], 00:31:38.371 | 70.00th=[ 9634], 80.00th=[11600], 90.00th=[17957], 95.00th=[22414], 00:31:38.371 | 99.00th=[33162], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:31:38.371 | 99.99th=[36439] 00:31:38.371 bw ( KiB/s): min=21872, max=30024, per=31.61%, avg=25948.00, stdev=5764.33, samples=2 00:31:38.371 iops : min= 5468, max= 7506, avg=6487.00, stdev=1441.08, samples=2 00:31:38.371 lat (msec) : 2=0.01%, 4=0.82%, 10=63.92%, 20=29.24%, 50=6.01% 00:31:38.371 cpu : usr=4.89%, sys=6.38%, ctx=402, majf=0, minf=1 00:31:38.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.371 issued rwts: total=6144,6614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.371 job2: (groupid=0, jobs=1): err= 0: pid=2313355: Mon Jun 10 11:40:06 2024 00:31:38.371 read: IOPS=3271, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1005msec) 00:31:38.371 slat (nsec): min=1303, max=29089k, avg=200332.97, stdev=1578691.59 00:31:38.371 clat (usec): min=1765, max=100791, avg=25582.30, stdev=24456.37 00:31:38.371 lat (msec): min=2, max=100, avg=25.78, stdev=24.59 00:31:38.371 clat percentiles (msec): 00:31:38.371 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 12], 00:31:38.371 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 17], 00:31:38.371 | 70.00th=[ 25], 80.00th=[ 36], 90.00th=[ 68], 95.00th=[ 92], 00:31:38.371 | 99.00th=[ 102], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:31:38.371 | 99.99th=[ 102] 00:31:38.371 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:31:38.371 slat (usec): min=2, max=7396, avg=89.98, stdev=524.44 00:31:38.371 clat (usec): min=2048, max=32052, avg=12080.69, stdev=4230.26 00:31:38.371 lat (usec): min=2056, max=32078, avg=12170.67, stdev=4246.08 00:31:38.371 clat percentiles (usec): 00:31:38.371 | 1.00th=[ 2900], 5.00th=[ 5997], 10.00th=[ 6325], 20.00th=[ 8848], 00:31:38.371 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:31:38.371 | 70.00th=[13042], 80.00th=[14222], 90.00th=[17171], 95.00th=[21627], 00:31:38.371 | 99.00th=[23462], 99.50th=[25035], 99.90th=[28443], 99.95th=[28443], 00:31:38.371 | 99.99th=[32113] 00:31:38.371 bw ( KiB/s): min= 8192, max=20480, per=17.46%, avg=14336.00, stdev=8688.93, samples=2 00:31:38.371 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:31:38.371 lat (msec) : 2=0.01%, 4=3.04%, 10=14.26%, 20=62.05%, 50=14.30% 00:31:38.371 lat (msec) : 100=4.98%, 250=1.35% 00:31:38.371 cpu : usr=2.09%, sys=3.69%, ctx=279, majf=0, minf=1 00:31:38.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.371 issued rwts: total=3288,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.371 job3: (groupid=0, jobs=1): err= 0: pid=2313360: Mon Jun 10 11:40:06 2024 00:31:38.371 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec) 00:31:38.371 slat (nsec): min=1415, max=11996k, avg=86723.65, stdev=649911.17 00:31:38.371 clat (usec): min=3942, max=34629, avg=11070.42, stdev=3541.73 00:31:38.371 lat (usec): min=3948, max=34632, avg=11157.14, stdev=3588.71 00:31:38.371 clat percentiles (usec): 00:31:38.371 | 1.00th=[ 5735], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8586], 00:31:38.371 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[11207], 00:31:38.371 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14877], 95.00th=[16909], 00:31:38.371 | 99.00th=[22676], 99.50th=[32113], 99.90th=[33817], 99.95th=[34866], 00:31:38.371 | 99.99th=[34866] 00:31:38.371 write: IOPS=6356, BW=24.8MiB/s (26.0MB/s)(25.1MiB/1009msec); 0 zone resets 00:31:38.371 slat (usec): min=2, max=10351, avg=66.70, stdev=436.96 00:31:38.371 clat (usec): min=1297, max=34623, avg=9339.94, stdev=3103.25 00:31:38.371 lat (usec): min=1308, max=34625, avg=9406.64, stdev=3122.18 00:31:38.371 clat percentiles (usec): 00:31:38.371 | 1.00th=[ 3425], 5.00th=[ 5145], 10.00th=[ 5800], 20.00th=[ 6718], 00:31:38.371 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9503], 00:31:38.371 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12387], 95.00th=[13960], 00:31:38.371 | 99.00th=[23462], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:31:38.371 | 99.99th=[34866] 00:31:38.371 bw ( KiB/s): min=22896, max=27400, per=30.63%, avg=25148.00, stdev=3184.81, samples=2 00:31:38.371 iops : min= 5724, max= 6850, avg=6287.00, stdev=796.20, samples=2 00:31:38.371 lat (msec) : 2=0.02%, 4=1.08%, 10=55.29%, 20=42.29%, 50=1.32% 00:31:38.371 cpu : usr=5.06%, sys=6.55%, ctx=569, majf=0, minf=1 00:31:38.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.371 issued rwts: total=6144,6414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.371 00:31:38.371 Run status group 0 (all jobs): 00:31:38.371 READ: bw=76.1MiB/s (79.8MB/s), 12.8MiB/s-23.9MiB/s (13.4MB/s-25.1MB/s), io=76.8MiB (80.5MB), run=1004-1009msec 00:31:38.371 WRITE: bw=80.2MiB/s (84.1MB/s), 13.9MiB/s-25.7MiB/s (14.6MB/s-27.0MB/s), io=80.9MiB (84.8MB), run=1004-1009msec 00:31:38.371 00:31:38.371 Disk stats (read/write): 00:31:38.371 nvme0n1: ios=3122/3303, merge=0/0, ticks=38544/56607, in_queue=95151, util=87.98% 00:31:38.371 nvme0n2: ios=5653/5647, merge=0/0, ticks=55217/46914, in_queue=102131, util=98.88% 00:31:38.371 nvme0n3: ios=3014/3072, merge=0/0, ticks=25272/14986, in_queue=40258, util=92.41% 00:31:38.371 nvme0n4: ios=5028/5120, merge=0/0, ticks=54751/47213, in_queue=101964, util=97.55% 00:31:38.371 11:40:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:38.371 11:40:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2313685 00:31:38.371 11:40:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:38.371 11:40:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:38.371 [global] 00:31:38.371 thread=1 00:31:38.371 invalidate=1 00:31:38.371 rw=read 00:31:38.371 time_based=1 00:31:38.371 runtime=10 00:31:38.371 ioengine=libaio 00:31:38.371 direct=1 00:31:38.371 bs=4096 00:31:38.371 iodepth=1 00:31:38.371 norandommap=1 00:31:38.371 numjobs=1 00:31:38.371 00:31:38.371 [job0] 00:31:38.371 filename=/dev/nvme0n1 00:31:38.371 [job1] 00:31:38.371 filename=/dev/nvme0n2 00:31:38.371 [job2] 00:31:38.371 filename=/dev/nvme0n3 00:31:38.371 [job3] 00:31:38.371 filename=/dev/nvme0n4 00:31:38.371 Could not set queue depth (nvme0n1) 00:31:38.371 Could not set queue depth (nvme0n2) 00:31:38.371 Could not set queue depth (nvme0n3) 00:31:38.371 Could not set queue depth (nvme0n4) 00:31:38.637 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.637 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.637 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.637 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:38.637 fio-3.35 00:31:38.637 Starting 4 threads 00:31:41.182 11:40:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:41.443 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2314240, buflen=4096 00:31:41.443 fio: pid=2313880, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:41.443 11:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:41.758 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9830400, buflen=4096 00:31:41.758 fio: pid=2313879, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:41.758 11:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:41.758 11:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:41.758 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=303104, buflen=4096 00:31:41.758 fio: pid=2313871, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:41.758 11:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:41.758 11:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:42.040 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=360448, buflen=4096 00:31:42.040 fio: pid=2313877, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:31:42.040 11:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:42.040 11:40:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:42.040 00:31:42.040 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2313871: Mon Jun 10 11:40:10 2024 00:31:42.040 
read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(296KiB/3053msec) 00:31:42.040 slat (usec): min=24, max=13493, avg=264.76, stdev=1631.97 00:31:42.040 clat (usec): min=1023, max=42982, avg=40967.97, stdev=6707.39 00:31:42.040 lat (usec): min=1059, max=54996, avg=41235.98, stdev=6932.23 00:31:42.040 clat percentiles (usec): 00:31:42.040 | 1.00th=[ 1020], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:31:42.040 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:42.040 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:31:42.040 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:42.040 | 99.99th=[42730] 00:31:42.040 bw ( KiB/s): min= 95, max= 104, per=2.53%, avg=97.40, stdev= 3.71, samples=5 00:31:42.040 iops : min= 23, max= 26, avg=24.20, stdev= 1.10, samples=5 00:31:42.040 lat (msec) : 2=2.67%, 50=96.00% 00:31:42.040 cpu : usr=0.13%, sys=0.00%, ctx=77, majf=0, minf=1 00:31:42.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.040 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.040 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:42.040 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2313877: Mon Jun 10 11:40:10 2024 00:31:42.040 read: IOPS=27, BW=108KiB/s (111kB/s)(352KiB/3257msec) 00:31:42.040 slat (usec): min=9, max=13719, avg=319.80, stdev=1898.69 00:31:42.040 clat (usec): min=607, max=46095, avg=36667.12, stdev=13565.95 00:31:42.040 lat (usec): min=617, max=56013, avg=36990.26, stdev=13810.74 00:31:42.040 clat percentiles (usec): 00:31:42.040 | 1.00th=[ 611], 5.00th=[ 1045], 10.00th=[ 1221], 20.00th=[41157], 00:31:42.040 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:31:42.040 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:42.040 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:31:42.040 | 99.99th=[45876] 00:31:42.040 bw ( KiB/s): min= 96, max= 176, per=2.84%, avg=109.33, stdev=32.66, samples=6 00:31:42.040 iops : min= 24, max= 44, avg=27.33, stdev= 8.16, samples=6 00:31:42.040 lat (usec) : 750=2.25%, 1000=2.25% 00:31:42.040 lat (msec) : 2=7.87%, 50=86.52% 00:31:42.040 cpu : usr=0.15%, sys=0.00%, ctx=93, majf=0, minf=1 00:31:42.040 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.040 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.040 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.040 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:42.040 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2313879: Mon Jun 10 11:40:10 2024 00:31:42.040 read: IOPS=843, BW=3372KiB/s (3453kB/s)(9600KiB/2847msec) 00:31:42.040 slat (usec): min=6, max=235, avg=25.99, stdev= 5.51 00:31:42.040 clat (usec): min=530, max=42653, avg=1153.51, stdev=1871.88 00:31:42.040 lat (usec): min=539, max=42679, avg=1179.50, stdev=1873.76 00:31:42.040 clat percentiles (usec): 00:31:42.040 | 1.00th=[ 840], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1029], 00:31:42.040 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:31:42.040 | 70.00th=[ 
1106], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1172], 00:31:42.040 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[42206], 99.95th=[42206], 00:31:42.040 | 99.99th=[42730] 00:31:42.040 bw ( KiB/s): min= 3608, max= 3672, per=94.78%, avg=3640.00, stdev=25.30, samples=5 00:31:42.040 iops : min= 902, max= 918, avg=910.00, stdev= 6.32, samples=5 00:31:42.040 lat (usec) : 750=0.37%, 1000=11.79% 00:31:42.040 lat (msec) : 2=87.59%, 50=0.21% 00:31:42.040 cpu : usr=1.79%, sys=3.06%, ctx=2402, majf=0, minf=1 00:31:42.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.041 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.041 issued rwts: total=2401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:42.041 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2313880: Mon Jun 10 11:40:10 2024 00:31:42.041 read: IOPS=217, BW=867KiB/s (888kB/s)(2260KiB/2606msec) 00:31:42.041 slat (nsec): min=6400, max=64748, avg=22944.93, stdev=6129.32 00:31:42.041 clat (usec): min=396, max=43012, avg=4580.20, stdev=11837.15 00:31:42.041 lat (usec): min=403, max=43037, avg=4603.14, stdev=11837.72 00:31:42.041 clat percentiles (usec): 00:31:42.041 | 1.00th=[ 482], 5.00th=[ 578], 10.00th=[ 635], 20.00th=[ 693], 00:31:42.041 | 30.00th=[ 742], 40.00th=[ 791], 50.00th=[ 832], 60.00th=[ 865], 00:31:42.041 | 70.00th=[ 930], 80.00th=[ 1057], 90.00th=[ 1205], 95.00th=[42206], 00:31:42.041 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:42.041 | 99.99th=[43254] 00:31:42.041 bw ( KiB/s): min= 96, max= 2224, per=18.98%, avg=729.00, stdev=948.72, samples=5 00:31:42.041 iops : min= 24, max= 556, avg=182.20, stdev=237.15, samples=5 00:31:42.041 lat (usec) : 500=1.06%, 750=30.04%, 1000=42.23% 00:31:42.041 lat (msec) : 2=17.31%, 50=9.19% 00:31:42.041 cpu : usr=0.27%, sys=0.54%, ctx=566, majf=0, minf=2 00:31:42.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.041 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.041 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:42.041 00:31:42.041 Run status group 0 (all jobs): 00:31:42.041 READ: bw=3840KiB/s (3933kB/s), 97.0KiB/s-3372KiB/s (99.3kB/s-3453kB/s), io=12.2MiB (12.8MB), run=2606-3257msec 00:31:42.041 00:31:42.041 Disk stats (read/write): 00:31:42.041 nvme0n1: ios=68/0, merge=0/0, ticks=2780/0, in_queue=2780, util=94.16% 00:31:42.041 nvme0n2: ios=84/0, merge=0/0, ticks=3064/0, in_queue=3064, util=95.28% 00:31:42.041 nvme0n3: ios=2399/0, merge=0/0, ticks=2521/0, in_queue=2521, util=96.36% 00:31:42.041 nvme0n4: ios=544/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.42% 00:31:42.302 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:42.302 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:42.562 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:42.562 11:40:11 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:42.562 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:42.562 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:42.823 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:42.823 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:43.083 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:43.083 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2313685 00:31:43.083 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:43.083 11:40:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:43.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:43.083 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:43.083 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:31:43.083 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:31:43.083 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:43.083 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:31:43.083 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:43.349 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:31:43.349 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:43.349 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:43.349 nvmf hotplug test: fio failed as expected 00:31:43.349 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:43.350 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:43.350 rmmod nvme_tcp 00:31:43.611 rmmod nvme_fabrics 00:31:43.611 rmmod nvme_keyring 00:31:43.611 11:40:12 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2309967 ']' 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2309967 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 2309967 ']' 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 2309967 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2309967 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2309967' 00:31:43.611 killing process with pid 2309967 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 2309967 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 2309967 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:43.611 11:40:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.157 11:40:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:46.157 00:31:46.157 real 0m29.456s 00:31:46.157 user 2m39.054s 00:31:46.157 sys 0m8.719s 00:31:46.157 11:40:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:46.157 11:40:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.157 ************************************ 00:31:46.157 END TEST nvmf_fio_target 00:31:46.157 ************************************ 00:31:46.157 11:40:14 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:46.157 11:40:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:46.157 11:40:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:46.157 11:40:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:46.157 ************************************ 00:31:46.157 START TEST nvmf_bdevio 00:31:46.157 ************************************ 00:31:46.157 11:40:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:31:46.157 * Looking for test storage... 00:31:46.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.157 11:40:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.157 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:46.157 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:31:46.158 11:40:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:52.745 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:52.746 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:52.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:52.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:52.746 
Found net devices under 0000:4b:00.1: cvl_0_1 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:52.746 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:53.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:31:53.007 00:31:53.007 --- 10.0.0.2 ping statistics --- 00:31:53.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.007 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:53.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:31:53.007 00:31:53.007 --- 10.0.0.1 ping statistics --- 00:31:53.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.007 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2319011 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2319011 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 2319011 ']' 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:53.007 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.008 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:53.008 11:40:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:53.008 [2024-06-10 11:40:21.830723] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:31:53.008 [2024-06-10 11:40:21.830787] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.008 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.008 [2024-06-10 11:40:21.918531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.268 [2024-06-10 11:40:22.011728] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.268 [2024-06-10 11:40:22.011782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:53.268 [2024-06-10 11:40:22.011790] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.268 [2024-06-10 11:40:22.011796] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.268 [2024-06-10 11:40:22.011802] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.268 [2024-06-10 11:40:22.011965] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:31:53.268 [2024-06-10 11:40:22.012134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:31:53.268 [2024-06-10 11:40:22.012300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.268 [2024-06-10 11:40:22.012301] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:53.840 [2024-06-10 11:40:22.783355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:53.840 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.100 Malloc0 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
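The rpc_cmd sequence above (create the TCP transport, a 64 MiB Malloc0 bdev, subsystem cnode1, its namespace, and the 10.0.0.2:4420 listener) maps onto plain scripts/rpc.py invocations. A minimal sketch of the equivalent standalone calls, assuming the default /var/tmp/spdk.sock RPC socket rather than the harness's rpc_cmd wrapper:

# Sketch: same target configuration issued directly via scripts/rpc.py (not the harness itself)
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
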
00:31:54.100 [2024-06-10 11:40:22.848358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:54.100 { 00:31:54.100 "params": { 00:31:54.100 "name": "Nvme$subsystem", 00:31:54.100 "trtype": "$TEST_TRANSPORT", 00:31:54.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.100 "adrfam": "ipv4", 00:31:54.100 "trsvcid": "$NVMF_PORT", 00:31:54.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.100 "hdgst": ${hdgst:-false}, 00:31:54.100 "ddgst": ${ddgst:-false} 00:31:54.100 }, 00:31:54.100 "method": "bdev_nvme_attach_controller" 00:31:54.100 } 00:31:54.100 EOF 00:31:54.100 )") 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:31:54.100 11:40:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:54.100 "params": { 00:31:54.100 "name": "Nvme1", 00:31:54.100 "trtype": "tcp", 00:31:54.100 "traddr": "10.0.0.2", 00:31:54.100 "adrfam": "ipv4", 00:31:54.100 "trsvcid": "4420", 00:31:54.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:54.100 "hdgst": false, 00:31:54.100 "ddgst": false 00:31:54.100 }, 00:31:54.100 "method": "bdev_nvme_attach_controller" 00:31:54.100 }' 00:31:54.100 [2024-06-10 11:40:22.910266] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:31:54.101 [2024-06-10 11:40:22.910340] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2319259 ] 00:31:54.101 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.101 [2024-06-10 11:40:22.977007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:54.101 [2024-06-10 11:40:23.052548] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.101 [2024-06-10 11:40:23.052705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.101 [2024-06-10 11:40:23.052727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.672 I/O targets: 00:31:54.672 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:54.672 00:31:54.672 00:31:54.672 CUnit - A unit testing framework for C - Version 2.1-3 00:31:54.672 http://cunit.sourceforge.net/ 00:31:54.672 00:31:54.672 00:31:54.672 Suite: bdevio tests on: Nvme1n1 00:31:54.672 Test: blockdev write read block ...passed 00:31:54.672 Test: blockdev write zeroes read block ...passed 00:31:54.672 Test: blockdev write zeroes read no split ...passed 00:31:54.672 Test: blockdev write zeroes read split ...passed 00:31:54.672 Test: blockdev write zeroes read split partial ...passed 00:31:54.672 Test: blockdev reset ...[2024-06-10 11:40:23.559562] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:54.672 [2024-06-10 11:40:23.559621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e0560 (9): Bad file descriptor 00:31:54.672 [2024-06-10 11:40:23.618837] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
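The JSON handed to bdevio over /dev/fd/62 above attaches an NVMe-oF controller with the parameters printed by gen_nvmf_target_json (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1). The same listener can also be reached from the initiator side with nvme-cli; a sketch using the host NQN and host ID that common.sh exports for this run:

# Sketch: manual initiator-side connect to the listener the bdevio tests exercise
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204
# ...run I/O against the new /dev/nvme* device, then disconnect:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
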
00:31:54.672 passed 00:31:54.933 Test: blockdev write read 8 blocks ...passed 00:31:54.933 Test: blockdev write read size > 128k ...passed 00:31:54.933 Test: blockdev write read invalid size ...passed 00:31:54.933 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:54.933 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:54.933 Test: blockdev write read max offset ...passed 00:31:54.933 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:54.933 Test: blockdev writev readv 8 blocks ...passed 00:31:54.933 Test: blockdev writev readv 30 x 1block ...passed 00:31:54.933 Test: blockdev writev readv block ...passed 00:31:54.933 Test: blockdev writev readv size > 128k ...passed 00:31:55.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:55.193 Test: blockdev comparev and writev ...[2024-06-10 11:40:23.927875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.927901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:23.927912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.927917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:23.928431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.928439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:23.928449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.928454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:23.928983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.928992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:23.929002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.929008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:23.929517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.929526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:23.929536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.193 [2024-06-10 11:40:23.929542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:55.193 passed 00:31:55.193 Test: blockdev nvme passthru rw ...passed 00:31:55.193 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:40:24.014592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.193 [2024-06-10 11:40:24.014603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:24.014893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.193 [2024-06-10 11:40:24.014901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:24.015343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.193 [2024-06-10 11:40:24.015351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:55.193 [2024-06-10 11:40:24.015639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.193 [2024-06-10 11:40:24.015648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:55.193 passed 00:31:55.193 Test: blockdev nvme admin passthru ...passed 00:31:55.193 Test: blockdev copy ...passed 00:31:55.193 00:31:55.193 Run Summary: Type Total Ran Passed Failed Inactive 00:31:55.193 suites 1 1 n/a 0 0 00:31:55.193 tests 23 23 23 0 0 00:31:55.193 asserts 152 152 152 0 n/a 00:31:55.193 00:31:55.193 Elapsed time = 1.428 seconds 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:55.454 rmmod nvme_tcp 00:31:55.454 rmmod nvme_fabrics 00:31:55.454 rmmod nvme_keyring 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2319011 ']' 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2319011 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
2319011 ']' 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 2319011 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2319011 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2319011' 00:31:55.454 killing process with pid 2319011 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 2319011 00:31:55.454 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 2319011 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.715 11:40:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.629 11:40:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:57.629 00:31:57.629 real 0m11.865s 00:31:57.629 user 0m14.627s 00:31:57.629 sys 0m5.713s 00:31:57.629 11:40:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:57.629 11:40:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:57.629 ************************************ 00:31:57.629 END TEST nvmf_bdevio 00:31:57.629 ************************************ 00:31:57.891 11:40:26 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:57.891 11:40:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:57.891 11:40:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:57.891 11:40:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:57.891 ************************************ 00:31:57.891 START TEST nvmf_auth_target 00:31:57.891 ************************************ 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:57.891 * Looking for test storage... 
00:31:57.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.891 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:57.892 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:57.892 11:40:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:57.892 11:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.039 11:40:33 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.039 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:06.040 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:06.040 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:32:06.040 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:06.040 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:06.040 11:40:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:06.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:32:06.040 00:32:06.040 --- 10.0.0.2 ping statistics --- 00:32:06.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.040 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.446 ms 00:32:06.040 00:32:06.040 --- 10.0.0.1 ping statistics --- 00:32:06.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.040 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2323650 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2323650 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2323650 ']' 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
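The gen_dhchap_key calls that follow build the DH-HMAC-CHAP secrets auth.sh will use: a few random bytes from /dev/urandom are hex-encoded with xxd, wrapped into a DHHC-1 string by the python helper in nvmf/common.sh, and written to a mode-0600 temp file. A minimal sketch of that pattern, with the final encoding step left as a placeholder since the helper's body is not shown in this log:

# Sketch of the key-generation pattern used below ("null" digest, 48 hex characters)
key=$(xxd -p -c0 -l 24 /dev/urandom)     # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
# format_dhchap_key would wrap "$key" into a DHHC-1 secret here; plain hex is only a stand-in
echo "$key" > "$file"
chmod 0600 "$file"
echo "$file"
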
00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2323804 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=68370f7869a89b9d108fa527216a8a878500df82772bc4d0 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GeB 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 68370f7869a89b9d108fa527216a8a878500df82772bc4d0 0 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 68370f7869a89b9d108fa527216a8a878500df82772bc4d0 0 00:32:06.040 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=68370f7869a89b9d108fa527216a8a878500df82772bc4d0 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GeB 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GeB 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.GeB 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6094b6ae2d0621ca6379e6bc5becfefdf1ce5b2f9091581893fa507311e050b6 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TDj 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6094b6ae2d0621ca6379e6bc5becfefdf1ce5b2f9091581893fa507311e050b6 3 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6094b6ae2d0621ca6379e6bc5becfefdf1ce5b2f9091581893fa507311e050b6 3 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6094b6ae2d0621ca6379e6bc5becfefdf1ce5b2f9091581893fa507311e050b6 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TDj 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TDj 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.TDj 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5a1f3b1f511b1495e734c04f3681f692 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yUm 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5a1f3b1f511b1495e734c04f3681f692 1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5a1f3b1f511b1495e734c04f3681f692 1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=5a1f3b1f511b1495e734c04f3681f692 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yUm 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yUm 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.yUm 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=545c39697aa83f962f5eb5a9625e4eb40e7dccdd9af70361 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.syK 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 545c39697aa83f962f5eb5a9625e4eb40e7dccdd9af70361 2 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 545c39697aa83f962f5eb5a9625e4eb40e7dccdd9af70361 2 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=545c39697aa83f962f5eb5a9625e4eb40e7dccdd9af70361 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.syK 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.syK 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.syK 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=84915c3812ac684e59cb62ceaa98ea2c0eaaacf9beb740e2 00:32:06.041 
11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oDZ 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 84915c3812ac684e59cb62ceaa98ea2c0eaaacf9beb740e2 2 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 84915c3812ac684e59cb62ceaa98ea2c0eaaacf9beb740e2 2 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=84915c3812ac684e59cb62ceaa98ea2c0eaaacf9beb740e2 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oDZ 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oDZ 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.oDZ 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2593d28a35b4ac4f809f582a10524571 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.s5c 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2593d28a35b4ac4f809f582a10524571 1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2593d28a35b4ac4f809f582a10524571 1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2593d28a35b4ac4f809f582a10524571 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.s5c 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.s5c 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.s5c 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=32a550631ea8af6f5a6b974ebc60c7386a0f50ecce74e1ffe7702cf1f788f176 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AwT 00:32:06.041 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 32a550631ea8af6f5a6b974ebc60c7386a0f50ecce74e1ffe7702cf1f788f176 3 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 32a550631ea8af6f5a6b974ebc60c7386a0f50ecce74e1ffe7702cf1f788f176 3 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=32a550631ea8af6f5a6b974ebc60c7386a0f50ecce74e1ffe7702cf1f788f176 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AwT 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AwT 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.AwT 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2323650 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2323650 ']' 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
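(Reader's note) The gen_dhchap_key calls traced above boil down to: draw len/2 random bytes as hex, wrap the hex string in a DHHC-1:<digest-id>: envelope, and store the result with mode 0600 under /tmp. A stand-alone sketch of that flow follows; the CRC32 suffix and its byte order are an assumption inferred from the secrets reused later in this log, not lifted from nvmf/common.sh:

# Sketch: recreate a null-digest secret like /tmp/spdk.key-null.GeB above (assumptions noted)
key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" <<'PY' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()                     # the hex string itself is the secret material
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed: CRC32 of the key appended little-endian
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")   # 00 = null digest, 01/02/03 = sha256/384/512
PY
chmod 0600 "$file"                             # keys are kept owner-readable only, as in the trace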
00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:06.042 11:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2323804 /var/tmp/host.sock 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2323804 ']' 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:32:06.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:06.303 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GeB 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GeB 00:32:06.565 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GeB 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.TDj ]] 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TDj 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TDj 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TDj 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yUm 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.yUm 00:32:06.827 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.yUm 00:32:07.106 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.syK ]] 00:32:07.106 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.syK 00:32:07.107 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.107 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.107 11:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.107 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.syK 00:32:07.107 11:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.syK 00:32:07.368 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:32:07.368 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oDZ 00:32:07.368 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.368 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.368 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.368 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oDZ 00:32:07.368 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oDZ 00:32:07.629 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.s5c ]] 00:32:07.629 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s5c 00:32:07.629 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.629 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.629 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.629 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s5c 00:32:07.629 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.s5c 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.AwT 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.AwT 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.AwT 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:07.890 11:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.150 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.412 00:32:08.412 11:40:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:08.412 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:08.412 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:08.674 { 00:32:08.674 "cntlid": 1, 00:32:08.674 "qid": 0, 00:32:08.674 "state": "enabled", 00:32:08.674 "listen_address": { 00:32:08.674 "trtype": "TCP", 00:32:08.674 "adrfam": "IPv4", 00:32:08.674 "traddr": "10.0.0.2", 00:32:08.674 "trsvcid": "4420" 00:32:08.674 }, 00:32:08.674 "peer_address": { 00:32:08.674 "trtype": "TCP", 00:32:08.674 "adrfam": "IPv4", 00:32:08.674 "traddr": "10.0.0.1", 00:32:08.674 "trsvcid": "50370" 00:32:08.674 }, 00:32:08.674 "auth": { 00:32:08.674 "state": "completed", 00:32:08.674 "digest": "sha256", 00:32:08.674 "dhgroup": "null" 00:32:08.674 } 00:32:08.674 } 00:32:08.674 ]' 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:08.674 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:08.934 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:08.934 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:08.934 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:08.934 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:08.934 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:09.196 11:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:09.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:09.768 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.028 11:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.289 00:32:10.289 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:10.289 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:10.289 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:10.549 { 00:32:10.549 "cntlid": 3, 00:32:10.549 "qid": 0, 00:32:10.549 "state": "enabled", 00:32:10.549 "listen_address": { 00:32:10.549 
"trtype": "TCP", 00:32:10.549 "adrfam": "IPv4", 00:32:10.549 "traddr": "10.0.0.2", 00:32:10.549 "trsvcid": "4420" 00:32:10.549 }, 00:32:10.549 "peer_address": { 00:32:10.549 "trtype": "TCP", 00:32:10.549 "adrfam": "IPv4", 00:32:10.549 "traddr": "10.0.0.1", 00:32:10.549 "trsvcid": "50392" 00:32:10.549 }, 00:32:10.549 "auth": { 00:32:10.549 "state": "completed", 00:32:10.549 "digest": "sha256", 00:32:10.549 "dhgroup": "null" 00:32:10.549 } 00:32:10.549 } 00:32:10.549 ]' 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:10.549 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:10.809 11:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:11.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.752 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:12.078 00:32:12.079 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:12.079 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:12.079 11:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:12.342 { 00:32:12.342 "cntlid": 5, 00:32:12.342 "qid": 0, 00:32:12.342 "state": "enabled", 00:32:12.342 "listen_address": { 00:32:12.342 "trtype": "TCP", 00:32:12.342 "adrfam": "IPv4", 00:32:12.342 "traddr": "10.0.0.2", 00:32:12.342 "trsvcid": "4420" 00:32:12.342 }, 00:32:12.342 "peer_address": { 00:32:12.342 "trtype": "TCP", 00:32:12.342 "adrfam": "IPv4", 00:32:12.342 "traddr": "10.0.0.1", 00:32:12.342 "trsvcid": "50432" 00:32:12.342 }, 00:32:12.342 "auth": { 00:32:12.342 "state": "completed", 00:32:12.342 "digest": "sha256", 00:32:12.342 "dhgroup": "null" 00:32:12.342 } 00:32:12.342 } 00:32:12.342 ]' 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:12.342 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:12.343 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:12.343 11:40:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:12.343 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:12.604 11:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:13.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:13.176 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.436 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.437 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.437 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:13.437 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:13.697 00:32:13.697 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:13.697 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:13.697 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:13.958 { 00:32:13.958 "cntlid": 7, 00:32:13.958 "qid": 0, 00:32:13.958 "state": "enabled", 00:32:13.958 "listen_address": { 00:32:13.958 "trtype": "TCP", 00:32:13.958 "adrfam": "IPv4", 00:32:13.958 "traddr": "10.0.0.2", 00:32:13.958 "trsvcid": "4420" 00:32:13.958 }, 00:32:13.958 "peer_address": { 00:32:13.958 "trtype": "TCP", 00:32:13.958 "adrfam": "IPv4", 00:32:13.958 "traddr": "10.0.0.1", 00:32:13.958 "trsvcid": "50468" 00:32:13.958 }, 00:32:13.958 "auth": { 00:32:13.958 "state": "completed", 00:32:13.958 "digest": "sha256", 00:32:13.958 "dhgroup": "null" 00:32:13.958 } 00:32:13.958 } 00:32:13.958 ]' 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:13.958 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:14.218 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:14.218 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:14.218 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:14.218 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:14.218 11:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:14.479 11:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:15.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.052 
11:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:15.052 11:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.313 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.573 00:32:15.573 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:15.573 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:15.573 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:15.833 11:40:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:15.833 { 00:32:15.833 "cntlid": 9, 00:32:15.833 "qid": 0, 00:32:15.833 "state": "enabled", 00:32:15.833 "listen_address": { 00:32:15.833 "trtype": "TCP", 00:32:15.833 "adrfam": "IPv4", 00:32:15.833 "traddr": "10.0.0.2", 00:32:15.833 "trsvcid": "4420" 00:32:15.833 }, 00:32:15.833 "peer_address": { 00:32:15.833 "trtype": "TCP", 00:32:15.833 "adrfam": "IPv4", 00:32:15.833 "traddr": "10.0.0.1", 00:32:15.833 "trsvcid": "50490" 00:32:15.833 }, 00:32:15.833 "auth": { 00:32:15.833 "state": "completed", 00:32:15.833 "digest": "sha256", 00:32:15.833 "dhgroup": "ffdhe2048" 00:32:15.833 } 00:32:15.833 } 00:32:15.833 ]' 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:15.833 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:16.094 11:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:17.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:17.036 11:40:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.036 11:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:17.037 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.037 11:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.297 00:32:17.297 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:17.297 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:17.297 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:17.559 { 00:32:17.559 "cntlid": 11, 00:32:17.559 "qid": 0, 00:32:17.559 "state": "enabled", 00:32:17.559 "listen_address": { 00:32:17.559 "trtype": "TCP", 00:32:17.559 "adrfam": "IPv4", 00:32:17.559 "traddr": "10.0.0.2", 00:32:17.559 "trsvcid": "4420" 00:32:17.559 }, 00:32:17.559 "peer_address": { 00:32:17.559 "trtype": "TCP", 00:32:17.559 "adrfam": "IPv4", 00:32:17.559 "traddr": "10.0.0.1", 00:32:17.559 "trsvcid": "50512" 00:32:17.559 }, 00:32:17.559 "auth": { 00:32:17.559 "state": "completed", 00:32:17.559 "digest": "sha256", 00:32:17.559 "dhgroup": "ffdhe2048" 00:32:17.559 } 00:32:17.559 } 00:32:17.559 ]' 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:17.559 11:40:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:17.559 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:17.820 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:17.820 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:17.820 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:17.820 11:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:18.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.764 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.025 00:32:19.025 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:19.025 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:19.025 11:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:19.286 { 00:32:19.286 "cntlid": 13, 00:32:19.286 "qid": 0, 00:32:19.286 "state": "enabled", 00:32:19.286 "listen_address": { 00:32:19.286 "trtype": "TCP", 00:32:19.286 "adrfam": "IPv4", 00:32:19.286 "traddr": "10.0.0.2", 00:32:19.286 "trsvcid": "4420" 00:32:19.286 }, 00:32:19.286 "peer_address": { 00:32:19.286 "trtype": "TCP", 00:32:19.286 "adrfam": "IPv4", 00:32:19.286 "traddr": "10.0.0.1", 00:32:19.286 "trsvcid": "37162" 00:32:19.286 }, 00:32:19.286 "auth": { 00:32:19.286 "state": "completed", 00:32:19.286 "digest": "sha256", 00:32:19.286 "dhgroup": "ffdhe2048" 00:32:19.286 } 00:32:19.286 } 00:32:19.286 ]' 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:19.286 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:19.547 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:19.548 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:19.548 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:19.548 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:19.548 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:19.808 11:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:32:20.380 11:40:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:20.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:20.380 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:20.380 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.380 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.380 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.380 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:20.380 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:20.380 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:20.641 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:20.901 00:32:20.901 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:20.901 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:20.901 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:21.161 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.161 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
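Every hostrpc call in this trace (the target/auth.sh@31 lines) expands to the same invocation of rpc.py against the host-side application's RPC socket, so the helper is effectively a thin wrapper. A minimal reconstruction, assuming only the expansion shown in the trace:

# hostrpc as it expands at target/auth.sh@31; the rpc.py path and the
# /var/tmp/host.sock socket are copied from the trace, the function body
# itself is a reconstruction.
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}

# Example from this section: restrict the host to one digest/dhgroup pair
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048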
00:32:21.161 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:21.162 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:21.162 11:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.162 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:21.162 { 00:32:21.162 "cntlid": 15, 00:32:21.162 "qid": 0, 00:32:21.162 "state": "enabled", 00:32:21.162 "listen_address": { 00:32:21.162 "trtype": "TCP", 00:32:21.162 "adrfam": "IPv4", 00:32:21.162 "traddr": "10.0.0.2", 00:32:21.162 "trsvcid": "4420" 00:32:21.162 }, 00:32:21.162 "peer_address": { 00:32:21.162 "trtype": "TCP", 00:32:21.162 "adrfam": "IPv4", 00:32:21.162 "traddr": "10.0.0.1", 00:32:21.162 "trsvcid": "37184" 00:32:21.162 }, 00:32:21.162 "auth": { 00:32:21.162 "state": "completed", 00:32:21.162 "digest": "sha256", 00:32:21.162 "dhgroup": "ffdhe2048" 00:32:21.162 } 00:32:21.162 } 00:32:21.162 ]' 00:32:21.162 11:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:21.162 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:21.162 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:21.162 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:21.162 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:21.162 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:21.162 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:21.162 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:21.422 11:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:22.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.364 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.624 00:32:22.624 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:22.624 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:22.624 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:22.885 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.885 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:22.885 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:22.885 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.885 11:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:22.885 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:22.885 { 00:32:22.885 "cntlid": 17, 00:32:22.885 "qid": 0, 00:32:22.885 "state": "enabled", 00:32:22.885 "listen_address": { 00:32:22.885 "trtype": "TCP", 00:32:22.885 "adrfam": "IPv4", 00:32:22.885 "traddr": "10.0.0.2", 00:32:22.885 "trsvcid": "4420" 00:32:22.885 }, 00:32:22.885 "peer_address": { 00:32:22.885 "trtype": "TCP", 00:32:22.885 "adrfam": "IPv4", 00:32:22.885 "traddr": "10.0.0.1", 00:32:22.885 "trsvcid": "37212" 00:32:22.885 }, 00:32:22.885 "auth": { 00:32:22.885 "state": "completed", 00:32:22.885 "digest": "sha256", 00:32:22.885 "dhgroup": "ffdhe3072" 00:32:22.885 } 00:32:22.885 } 00:32:22.885 ]' 00:32:22.885 11:40:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:22.885 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:23.145 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:23.145 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:23.145 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:23.145 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:23.145 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:23.145 11:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:23.404 11:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:32:23.993 11:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:23.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:23.993 11:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:23.993 11:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:23.993 11:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.993 11:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:23.993 11:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:23.993 11:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:23.994 11:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:24.253 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:32:24.253 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:24.253 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:24.253 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:24.253 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:24.253 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:24.254 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.254 11:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.254 
11:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.254 11:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.254 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.254 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.514 00:32:24.514 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:24.514 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:24.514 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:24.775 { 00:32:24.775 "cntlid": 19, 00:32:24.775 "qid": 0, 00:32:24.775 "state": "enabled", 00:32:24.775 "listen_address": { 00:32:24.775 "trtype": "TCP", 00:32:24.775 "adrfam": "IPv4", 00:32:24.775 "traddr": "10.0.0.2", 00:32:24.775 "trsvcid": "4420" 00:32:24.775 }, 00:32:24.775 "peer_address": { 00:32:24.775 "trtype": "TCP", 00:32:24.775 "adrfam": "IPv4", 00:32:24.775 "traddr": "10.0.0.1", 00:32:24.775 "trsvcid": "37244" 00:32:24.775 }, 00:32:24.775 "auth": { 00:32:24.775 "state": "completed", 00:32:24.775 "digest": "sha256", 00:32:24.775 "dhgroup": "ffdhe3072" 00:32:24.775 } 00:32:24.775 } 00:32:24.775 ]' 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:24.775 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:25.035 11:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:25.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.975 11:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.236 00:32:26.236 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:26.236 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
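Each iteration above follows the same three-step setup before anything is verified: the host bdev layer is limited to the digest and DH group under test, the host NQN is registered on the subsystem with that iteration's key (plus the controller key when one exists for that index), and a controller is attached over TCP with the matching key pair. A condensed sketch of that sequence for the key2/ffdhe3072 round shown here, using the same RPC names that appear in the trace:

# Host side: allow only the digest/dhgroup pair under test
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Target side: let the host NQN authenticate with key2 (bidirectional via ckey2)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller, authenticating with the same key pair
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2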
00:32:26.236 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:26.496 { 00:32:26.496 "cntlid": 21, 00:32:26.496 "qid": 0, 00:32:26.496 "state": "enabled", 00:32:26.496 "listen_address": { 00:32:26.496 "trtype": "TCP", 00:32:26.496 "adrfam": "IPv4", 00:32:26.496 "traddr": "10.0.0.2", 00:32:26.496 "trsvcid": "4420" 00:32:26.496 }, 00:32:26.496 "peer_address": { 00:32:26.496 "trtype": "TCP", 00:32:26.496 "adrfam": "IPv4", 00:32:26.496 "traddr": "10.0.0.1", 00:32:26.496 "trsvcid": "37256" 00:32:26.496 }, 00:32:26.496 "auth": { 00:32:26.496 "state": "completed", 00:32:26.496 "digest": "sha256", 00:32:26.496 "dhgroup": "ffdhe3072" 00:32:26.496 } 00:32:26.496 } 00:32:26.496 ]' 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:26.496 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:26.757 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:26.757 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:26.757 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:26.757 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:26.757 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:27.027 11:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:27.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:27.597 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:27.857 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:28.117 00:32:28.117 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:28.117 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:28.117 11:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:28.377 { 00:32:28.377 "cntlid": 23, 00:32:28.377 "qid": 0, 00:32:28.377 "state": "enabled", 00:32:28.377 "listen_address": { 00:32:28.377 "trtype": "TCP", 00:32:28.377 "adrfam": "IPv4", 00:32:28.377 "traddr": "10.0.0.2", 00:32:28.377 "trsvcid": "4420" 00:32:28.377 }, 00:32:28.377 "peer_address": { 00:32:28.377 "trtype": "TCP", 00:32:28.377 "adrfam": "IPv4", 
00:32:28.377 "traddr": "10.0.0.1", 00:32:28.377 "trsvcid": "37274" 00:32:28.377 }, 00:32:28.377 "auth": { 00:32:28.377 "state": "completed", 00:32:28.377 "digest": "sha256", 00:32:28.377 "dhgroup": "ffdhe3072" 00:32:28.377 } 00:32:28.377 } 00:32:28.377 ]' 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:28.377 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:28.637 11:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:29.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.578 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.838 00:32:29.838 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:29.838 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:29.838 11:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:30.098 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.098 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:30.098 11:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:30.098 11:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:30.098 11:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:30.098 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:30.098 { 00:32:30.098 "cntlid": 25, 00:32:30.098 "qid": 0, 00:32:30.098 "state": "enabled", 00:32:30.098 "listen_address": { 00:32:30.098 "trtype": "TCP", 00:32:30.098 "adrfam": "IPv4", 00:32:30.098 "traddr": "10.0.0.2", 00:32:30.098 "trsvcid": "4420" 00:32:30.098 }, 00:32:30.098 "peer_address": { 00:32:30.098 "trtype": "TCP", 00:32:30.098 "adrfam": "IPv4", 00:32:30.098 "traddr": "10.0.0.1", 00:32:30.098 "trsvcid": "50594" 00:32:30.098 }, 00:32:30.098 "auth": { 00:32:30.098 "state": "completed", 00:32:30.098 "digest": "sha256", 00:32:30.098 "dhgroup": "ffdhe4096" 00:32:30.098 } 00:32:30.098 } 00:32:30.098 ]' 00:32:30.098 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:30.359 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:30.360 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:30.360 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:30.360 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:30.360 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:30.360 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:30.360 11:40:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:30.621 11:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:31.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:31.193 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.455 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.716 00:32:31.716 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:31.716 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:31.716 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:31.977 { 00:32:31.977 "cntlid": 27, 00:32:31.977 "qid": 0, 00:32:31.977 "state": "enabled", 00:32:31.977 "listen_address": { 00:32:31.977 "trtype": "TCP", 00:32:31.977 "adrfam": "IPv4", 00:32:31.977 "traddr": "10.0.0.2", 00:32:31.977 "trsvcid": "4420" 00:32:31.977 }, 00:32:31.977 "peer_address": { 00:32:31.977 "trtype": "TCP", 00:32:31.977 "adrfam": "IPv4", 00:32:31.977 "traddr": "10.0.0.1", 00:32:31.977 "trsvcid": "50622" 00:32:31.977 }, 00:32:31.977 "auth": { 00:32:31.977 "state": "completed", 00:32:31.977 "digest": "sha256", 00:32:31.977 "dhgroup": "ffdhe4096" 00:32:31.977 } 00:32:31.977 } 00:32:31.977 ]' 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:31.977 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:32.239 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:32.239 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:32.239 11:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:32.239 11:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:33.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 
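After each attach, the script reads back the controller name and the subsystem's qpair list and asserts on the auth block printed in the JSON above. A sketch of those checks for the ffdhe4096 round, reusing the jq filters from the trace (the herestring plumbing is an assumption; only the filters and expected values come from the log):

# Confirm the host-side controller actually exists
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# Fetch the subsystem's qpairs from the target and check the negotiated auth block
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

# Tear the controller down before the kernel-initiator check
hostrpc bdev_nvme_detach_controller nvme0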
00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:33.183 11:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.183 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.445 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:33.707 
11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:33.707 { 00:32:33.707 "cntlid": 29, 00:32:33.707 "qid": 0, 00:32:33.707 "state": "enabled", 00:32:33.707 "listen_address": { 00:32:33.707 "trtype": "TCP", 00:32:33.707 "adrfam": "IPv4", 00:32:33.707 "traddr": "10.0.0.2", 00:32:33.707 "trsvcid": "4420" 00:32:33.707 }, 00:32:33.707 "peer_address": { 00:32:33.707 "trtype": "TCP", 00:32:33.707 "adrfam": "IPv4", 00:32:33.707 "traddr": "10.0.0.1", 00:32:33.707 "trsvcid": "50660" 00:32:33.707 }, 00:32:33.707 "auth": { 00:32:33.707 "state": "completed", 00:32:33.707 "digest": "sha256", 00:32:33.707 "dhgroup": "ffdhe4096" 00:32:33.707 } 00:32:33.707 } 00:32:33.707 ]' 00:32:33.707 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:33.968 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:33.968 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:33.968 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:33.968 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:33.968 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:33.968 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:33.968 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:34.229 11:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:34.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:34.801 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:35.062 11:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:35.323 00:32:35.323 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:35.323 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:35.323 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:35.585 { 00:32:35.585 "cntlid": 31, 00:32:35.585 "qid": 0, 00:32:35.585 "state": "enabled", 00:32:35.585 "listen_address": { 00:32:35.585 "trtype": "TCP", 00:32:35.585 "adrfam": "IPv4", 00:32:35.585 "traddr": "10.0.0.2", 00:32:35.585 "trsvcid": "4420" 00:32:35.585 }, 00:32:35.585 "peer_address": { 00:32:35.585 "trtype": "TCP", 00:32:35.585 "adrfam": "IPv4", 00:32:35.585 "traddr": "10.0.0.1", 00:32:35.585 "trsvcid": "50674" 00:32:35.585 }, 00:32:35.585 "auth": { 00:32:35.585 "state": "completed", 00:32:35.585 "digest": "sha256", 00:32:35.585 "dhgroup": "ffdhe4096" 00:32:35.585 } 00:32:35.585 } 00:32:35.585 ]' 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:35.585 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:35.846 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:35.846 11:41:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:35.846 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:35.846 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:35.846 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:36.106 11:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:36.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.679 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:32:36.940 11:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.201 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:37.461 { 00:32:37.461 "cntlid": 33, 00:32:37.461 "qid": 0, 00:32:37.461 "state": "enabled", 00:32:37.461 "listen_address": { 00:32:37.461 "trtype": "TCP", 00:32:37.461 "adrfam": "IPv4", 00:32:37.461 "traddr": "10.0.0.2", 00:32:37.461 "trsvcid": "4420" 00:32:37.461 }, 00:32:37.461 "peer_address": { 00:32:37.461 "trtype": "TCP", 00:32:37.461 "adrfam": "IPv4", 00:32:37.461 "traddr": "10.0.0.1", 00:32:37.461 "trsvcid": "50710" 00:32:37.461 }, 00:32:37.461 "auth": { 00:32:37.461 "state": "completed", 00:32:37.461 "digest": "sha256", 00:32:37.461 "dhgroup": "ffdhe6144" 00:32:37.461 } 00:32:37.461 } 00:32:37.461 ]' 00:32:37.461 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:37.722 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:37.722 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:37.722 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:37.722 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:37.722 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:37.722 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:37.722 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:37.982 11:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:32:38.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.635 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:38.898 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:32:38.898 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.899 11:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:39.159 00:32:39.159 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:39.159 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:39.159 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
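For reference, the verification step that follows each attach is just a set of jq probes over the target-side qpair listing. A condensed sketch assembled from the rpc_cmd/jq commands in this log (rpc.py path shortened to scripts/rpc.py; the subsystem NQN is the one used throughout this run, and the expected values are whatever digest/dhgroup the current loop iteration is exercising):
  # target-side RPC (default RPC socket): list qpairs for the subsystem and inspect negotiated auth fields
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect the digest under test, e.g. sha256
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect the dhgroup under test, e.g. ffdhe6144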
00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:39.421 { 00:32:39.421 "cntlid": 35, 00:32:39.421 "qid": 0, 00:32:39.421 "state": "enabled", 00:32:39.421 "listen_address": { 00:32:39.421 "trtype": "TCP", 00:32:39.421 "adrfam": "IPv4", 00:32:39.421 "traddr": "10.0.0.2", 00:32:39.421 "trsvcid": "4420" 00:32:39.421 }, 00:32:39.421 "peer_address": { 00:32:39.421 "trtype": "TCP", 00:32:39.421 "adrfam": "IPv4", 00:32:39.421 "traddr": "10.0.0.1", 00:32:39.421 "trsvcid": "59128" 00:32:39.421 }, 00:32:39.421 "auth": { 00:32:39.421 "state": "completed", 00:32:39.421 "digest": "sha256", 00:32:39.421 "dhgroup": "ffdhe6144" 00:32:39.421 } 00:32:39.421 } 00:32:39.421 ]' 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:39.421 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:39.682 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:39.682 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:39.682 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:39.682 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:39.682 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:39.682 11:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:40.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
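Each iteration of the key loop seen above boils down to three RPCs. A condensed sketch assembled from the commands in this log (rpc.py paths shortened; key2/ckey2 are the names used by the iteration shown next and refer to keys registered earlier in the run, not shown here):
  # host-side RPC socket (/var/tmp/host.sock): restrict the initiator to one digest/dhgroup pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # target side: allow the host NQN on the subsystem with the DH-HMAC-CHAP key pair under test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side again: attach a controller, which forces the authentication exchange to run
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2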
00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:40.625 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.196 00:32:41.196 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:41.196 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:41.196 11:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:41.196 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.196 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:41.196 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:41.196 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:41.457 { 00:32:41.457 "cntlid": 37, 00:32:41.457 "qid": 0, 00:32:41.457 "state": "enabled", 00:32:41.457 "listen_address": { 00:32:41.457 "trtype": "TCP", 00:32:41.457 "adrfam": "IPv4", 00:32:41.457 "traddr": "10.0.0.2", 00:32:41.457 "trsvcid": "4420" 00:32:41.457 }, 00:32:41.457 "peer_address": { 00:32:41.457 "trtype": "TCP", 00:32:41.457 "adrfam": "IPv4", 00:32:41.457 "traddr": "10.0.0.1", 00:32:41.457 "trsvcid": "59158" 00:32:41.457 }, 00:32:41.457 "auth": { 00:32:41.457 "state": "completed", 00:32:41.457 "digest": "sha256", 00:32:41.457 "dhgroup": "ffdhe6144" 00:32:41.457 } 00:32:41.457 } 00:32:41.457 ]' 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:41.457 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:41.717 11:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:42.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:42.286 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:32:42.545 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:42.546 11:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.546 11:41:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:42.546 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:42.546 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:43.115 00:32:43.115 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:43.115 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:43.115 11:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:43.115 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.115 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:43.115 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:43.115 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:43.377 { 00:32:43.377 "cntlid": 39, 00:32:43.377 "qid": 0, 00:32:43.377 "state": "enabled", 00:32:43.377 "listen_address": { 00:32:43.377 "trtype": "TCP", 00:32:43.377 "adrfam": "IPv4", 00:32:43.377 "traddr": "10.0.0.2", 00:32:43.377 "trsvcid": "4420" 00:32:43.377 }, 00:32:43.377 "peer_address": { 00:32:43.377 "trtype": "TCP", 00:32:43.377 "adrfam": "IPv4", 00:32:43.377 "traddr": "10.0.0.1", 00:32:43.377 "trsvcid": "59198" 00:32:43.377 }, 00:32:43.377 "auth": { 00:32:43.377 "state": "completed", 00:32:43.377 "digest": "sha256", 00:32:43.377 "dhgroup": "ffdhe6144" 00:32:43.377 } 00:32:43.377 } 00:32:43.377 ]' 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:43.377 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:43.640 11:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret 
DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:44.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.210 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:44.470 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.041 00:32:45.041 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:45.041 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:45.041 11:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:45.302 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.302 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:45.302 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:45.302 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.302 11:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:45.302 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:45.303 { 00:32:45.303 "cntlid": 41, 00:32:45.303 "qid": 0, 00:32:45.303 "state": "enabled", 00:32:45.303 "listen_address": { 00:32:45.303 "trtype": "TCP", 00:32:45.303 "adrfam": "IPv4", 00:32:45.303 "traddr": "10.0.0.2", 00:32:45.303 "trsvcid": "4420" 00:32:45.303 }, 00:32:45.303 "peer_address": { 00:32:45.303 "trtype": "TCP", 00:32:45.303 "adrfam": "IPv4", 00:32:45.303 "traddr": "10.0.0.1", 00:32:45.303 "trsvcid": "59230" 00:32:45.303 }, 00:32:45.303 "auth": { 00:32:45.303 "state": "completed", 00:32:45.303 "digest": "sha256", 00:32:45.303 "dhgroup": "ffdhe8192" 00:32:45.303 } 00:32:45.303 } 00:32:45.303 ]' 00:32:45.303 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:45.303 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:45.303 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:45.303 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:45.303 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:45.564 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:45.564 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:45.564 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:45.564 11:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:46.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.507 11:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.080 00:32:47.080 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:47.080 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:47.080 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:47.341 { 00:32:47.341 "cntlid": 43, 00:32:47.341 "qid": 0, 00:32:47.341 "state": "enabled", 00:32:47.341 "listen_address": { 00:32:47.341 "trtype": "TCP", 00:32:47.341 "adrfam": "IPv4", 00:32:47.341 "traddr": "10.0.0.2", 00:32:47.341 "trsvcid": "4420" 00:32:47.341 }, 00:32:47.341 "peer_address": { 
00:32:47.341 "trtype": "TCP", 00:32:47.341 "adrfam": "IPv4", 00:32:47.341 "traddr": "10.0.0.1", 00:32:47.341 "trsvcid": "59246" 00:32:47.341 }, 00:32:47.341 "auth": { 00:32:47.341 "state": "completed", 00:32:47.341 "digest": "sha256", 00:32:47.341 "dhgroup": "ffdhe8192" 00:32:47.341 } 00:32:47.341 } 00:32:47.341 ]' 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:47.341 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:47.602 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:47.602 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:47.602 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:47.602 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:47.602 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:47.864 11:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:48.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:48.436 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:48.696 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:32:48.696 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:48.696 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:48.696 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:48.696 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:48.696 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:48.697 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.697 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:48.697 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.697 11:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:48.697 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.697 11:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.269 00:32:49.269 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:49.269 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:49.269 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:49.530 { 00:32:49.530 "cntlid": 45, 00:32:49.530 "qid": 0, 00:32:49.530 "state": "enabled", 00:32:49.530 "listen_address": { 00:32:49.530 "trtype": "TCP", 00:32:49.530 "adrfam": "IPv4", 00:32:49.530 "traddr": "10.0.0.2", 00:32:49.530 "trsvcid": "4420" 00:32:49.530 }, 00:32:49.530 "peer_address": { 00:32:49.530 "trtype": "TCP", 00:32:49.530 "adrfam": "IPv4", 00:32:49.530 "traddr": "10.0.0.1", 00:32:49.530 "trsvcid": "54414" 00:32:49.530 }, 00:32:49.530 "auth": { 00:32:49.530 "state": "completed", 00:32:49.530 "digest": "sha256", 00:32:49.530 "dhgroup": "ffdhe8192" 00:32:49.530 } 00:32:49.530 } 00:32:49.530 ]' 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:49.530 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:49.792 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:49.792 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:49.792 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:49.792 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:49.792 11:41:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:50.053 11:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:50.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:50.644 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:50.905 11:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
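After the SPDK-host leg, the same key material is exercised through the kernel initiator with nvme-cli. A condensed sketch of that leg, with the long DHHC-1 secrets from this log elided; the gen-dhchap-key note is an assumption about recent nvme-cli, not something shown in this run:
  # connect through the kernel NVMe/TCP initiator, passing the DH-HMAC-CHAP secret(s) directly
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      --dhchap-secret 'DHHC-1:03:...'           # host secret (elided); add --dhchap-ctrl-secret for bidirectional auth
  # tear the kernel session back down; prints "NQN:... disconnected 1 controller(s)" as seen in this log
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # secrets in this DHHC-1:<hmac>:<base64>: form can be generated with nvme-cli's gen-dhchap-key (assumption)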
00:32:51.477 00:32:51.477 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:51.477 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:51.477 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:51.739 { 00:32:51.739 "cntlid": 47, 00:32:51.739 "qid": 0, 00:32:51.739 "state": "enabled", 00:32:51.739 "listen_address": { 00:32:51.739 "trtype": "TCP", 00:32:51.739 "adrfam": "IPv4", 00:32:51.739 "traddr": "10.0.0.2", 00:32:51.739 "trsvcid": "4420" 00:32:51.739 }, 00:32:51.739 "peer_address": { 00:32:51.739 "trtype": "TCP", 00:32:51.739 "adrfam": "IPv4", 00:32:51.739 "traddr": "10.0.0.1", 00:32:51.739 "trsvcid": "54442" 00:32:51.739 }, 00:32:51.739 "auth": { 00:32:51.739 "state": "completed", 00:32:51.739 "digest": "sha256", 00:32:51.739 "dhgroup": "ffdhe8192" 00:32:51.739 } 00:32:51.739 } 00:32:51.739 ]' 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:51.739 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:51.999 11:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:52.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.944 
11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.944 11:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.206 00:32:53.206 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:53.206 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:53.206 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:53.466 11:41:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:53.466 { 00:32:53.466 "cntlid": 49, 00:32:53.466 "qid": 0, 00:32:53.466 "state": "enabled", 00:32:53.466 "listen_address": { 00:32:53.466 "trtype": "TCP", 00:32:53.466 "adrfam": "IPv4", 00:32:53.466 "traddr": "10.0.0.2", 00:32:53.466 "trsvcid": "4420" 00:32:53.466 }, 00:32:53.466 "peer_address": { 00:32:53.466 "trtype": "TCP", 00:32:53.466 "adrfam": "IPv4", 00:32:53.466 "traddr": "10.0.0.1", 00:32:53.466 "trsvcid": "54472" 00:32:53.466 }, 00:32:53.466 "auth": { 00:32:53.466 "state": "completed", 00:32:53.466 "digest": "sha384", 00:32:53.466 "dhgroup": "null" 00:32:53.466 } 00:32:53.466 } 00:32:53.466 ]' 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:53.466 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:53.726 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:53.726 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:53.726 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:53.726 11:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:54.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.670 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:54.930 00:32:54.930 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:54.930 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:54.930 11:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:55.191 { 00:32:55.191 "cntlid": 51, 00:32:55.191 "qid": 0, 00:32:55.191 "state": "enabled", 00:32:55.191 "listen_address": { 00:32:55.191 "trtype": "TCP", 00:32:55.191 "adrfam": "IPv4", 00:32:55.191 "traddr": "10.0.0.2", 00:32:55.191 "trsvcid": "4420" 00:32:55.191 }, 00:32:55.191 "peer_address": { 00:32:55.191 "trtype": "TCP", 00:32:55.191 "adrfam": "IPv4", 00:32:55.191 "traddr": "10.0.0.1", 00:32:55.191 "trsvcid": "54516" 00:32:55.191 }, 00:32:55.191 "auth": { 00:32:55.191 "state": "completed", 00:32:55.191 "digest": "sha384", 00:32:55.191 "dhgroup": "null" 00:32:55.191 } 00:32:55.191 } 00:32:55.191 ]' 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:55.191 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:55.451 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
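Between iterations the host and target are returned to a clean state before the next key/dhgroup combination. A condensed sketch of the teardown commands repeated throughout this log (rpc.py paths shortened):
  # host side: drop the SPDK bdev controller created for this iteration
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # kernel side: disconnect the nvme-cli session from the subsystem
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # target side: remove the host entry so the next combination starts from scratch
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204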
00:32:55.451 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:55.451 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:55.451 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:55.452 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:55.452 11:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:56.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:32:56.391 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.652 00:32:56.652 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:56.652 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:56.652 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:56.913 { 00:32:56.913 "cntlid": 53, 00:32:56.913 "qid": 0, 00:32:56.913 "state": "enabled", 00:32:56.913 "listen_address": { 00:32:56.913 "trtype": "TCP", 00:32:56.913 "adrfam": "IPv4", 00:32:56.913 "traddr": "10.0.0.2", 00:32:56.913 "trsvcid": "4420" 00:32:56.913 }, 00:32:56.913 "peer_address": { 00:32:56.913 "trtype": "TCP", 00:32:56.913 "adrfam": "IPv4", 00:32:56.913 "traddr": "10.0.0.1", 00:32:56.913 "trsvcid": "54542" 00:32:56.913 }, 00:32:56.913 "auth": { 00:32:56.913 "state": "completed", 00:32:56.913 "digest": "sha384", 00:32:56.913 "dhgroup": "null" 00:32:56.913 } 00:32:56.913 } 00:32:56.913 ]' 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:56.913 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:57.174 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:57.174 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:57.174 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:57.174 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:57.174 11:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:57.435 11:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:58.006 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:58.006 11:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:58.266 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:32:58.266 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:58.266 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:58.266 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:32:58.266 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:32:58.266 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:58.267 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:32:58.267 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:58.267 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:58.267 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:58.267 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:58.267 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:32:58.526 00:32:58.526 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:32:58.527 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:58.527 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:32:58.786 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.786 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:32:58.787 { 00:32:58.787 "cntlid": 55, 00:32:58.787 "qid": 0, 00:32:58.787 "state": "enabled", 00:32:58.787 "listen_address": { 00:32:58.787 "trtype": "TCP", 00:32:58.787 "adrfam": "IPv4", 00:32:58.787 "traddr": "10.0.0.2", 00:32:58.787 "trsvcid": "4420" 00:32:58.787 }, 00:32:58.787 "peer_address": { 00:32:58.787 "trtype": "TCP", 00:32:58.787 "adrfam": "IPv4", 00:32:58.787 "traddr": "10.0.0.1", 00:32:58.787 "trsvcid": "48322" 00:32:58.787 }, 00:32:58.787 "auth": { 00:32:58.787 "state": "completed", 00:32:58.787 "digest": "sha384", 00:32:58.787 "dhgroup": "null" 00:32:58.787 } 00:32:58.787 } 00:32:58.787 ]' 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:58.787 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:59.047 11:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:59.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:32:59.988 
11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.988 11:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.248 00:33:00.248 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:00.248 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:00.248 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:00.508 { 00:33:00.508 "cntlid": 57, 00:33:00.508 "qid": 0, 00:33:00.508 "state": "enabled", 00:33:00.508 "listen_address": { 00:33:00.508 "trtype": "TCP", 00:33:00.508 "adrfam": "IPv4", 00:33:00.508 "traddr": "10.0.0.2", 00:33:00.508 "trsvcid": "4420" 00:33:00.508 }, 00:33:00.508 "peer_address": { 00:33:00.508 "trtype": "TCP", 00:33:00.508 "adrfam": "IPv4", 00:33:00.508 "traddr": "10.0.0.1", 00:33:00.508 "trsvcid": "48354" 00:33:00.508 }, 00:33:00.508 "auth": { 00:33:00.508 "state": "completed", 00:33:00.508 "digest": "sha384", 00:33:00.508 "dhgroup": "ffdhe2048" 00:33:00.508 } 00:33:00.508 } 00:33:00.508 ]' 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:00.508 11:41:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:00.508 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:00.768 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:00.768 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:00.768 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:00.768 11:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:01.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:01.710 11:41:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.710 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.970 00:33:01.970 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:01.970 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:01.970 11:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:02.230 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:02.231 { 00:33:02.231 "cntlid": 59, 00:33:02.231 "qid": 0, 00:33:02.231 "state": "enabled", 00:33:02.231 "listen_address": { 00:33:02.231 "trtype": "TCP", 00:33:02.231 "adrfam": "IPv4", 00:33:02.231 "traddr": "10.0.0.2", 00:33:02.231 "trsvcid": "4420" 00:33:02.231 }, 00:33:02.231 "peer_address": { 00:33:02.231 "trtype": "TCP", 00:33:02.231 "adrfam": "IPv4", 00:33:02.231 "traddr": "10.0.0.1", 00:33:02.231 "trsvcid": "48378" 00:33:02.231 }, 00:33:02.231 "auth": { 00:33:02.231 "state": "completed", 00:33:02.231 "digest": "sha384", 00:33:02.231 "dhgroup": "ffdhe2048" 00:33:02.231 } 00:33:02.231 } 00:33:02.231 ]' 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:02.231 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:02.503 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:02.503 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:02.503 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:02.503 11:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret 
DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:03.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.507 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.767 00:33:03.767 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:03.767 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:03.767 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:04.027 { 00:33:04.027 "cntlid": 61, 00:33:04.027 "qid": 0, 00:33:04.027 "state": "enabled", 00:33:04.027 "listen_address": { 00:33:04.027 "trtype": "TCP", 00:33:04.027 "adrfam": "IPv4", 00:33:04.027 "traddr": "10.0.0.2", 00:33:04.027 "trsvcid": "4420" 00:33:04.027 }, 00:33:04.027 "peer_address": { 00:33:04.027 "trtype": "TCP", 00:33:04.027 "adrfam": "IPv4", 00:33:04.027 "traddr": "10.0.0.1", 00:33:04.027 "trsvcid": "48408" 00:33:04.027 }, 00:33:04.027 "auth": { 00:33:04.027 "state": "completed", 00:33:04.027 "digest": "sha384", 00:33:04.027 "dhgroup": "ffdhe2048" 00:33:04.027 } 00:33:04.027 } 00:33:04.027 ]' 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:04.027 11:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:04.287 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:05.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:33:05.228 11:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:05.228 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:05.488 00:33:05.488 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:05.489 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:05.489 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:05.749 { 00:33:05.749 "cntlid": 63, 00:33:05.749 "qid": 0, 00:33:05.749 "state": "enabled", 00:33:05.749 "listen_address": { 00:33:05.749 "trtype": "TCP", 00:33:05.749 "adrfam": "IPv4", 00:33:05.749 "traddr": "10.0.0.2", 00:33:05.749 "trsvcid": "4420" 00:33:05.749 }, 00:33:05.749 "peer_address": { 00:33:05.749 "trtype": "TCP", 00:33:05.749 "adrfam": "IPv4", 00:33:05.749 "traddr": "10.0.0.1", 00:33:05.749 "trsvcid": "48430" 00:33:05.749 }, 00:33:05.749 "auth": { 00:33:05.749 "state": "completed", 00:33:05.749 "digest": 
"sha384", 00:33:05.749 "dhgroup": "ffdhe2048" 00:33:05.749 } 00:33:05.749 } 00:33:05.749 ]' 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:05.749 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:06.009 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:06.009 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:06.009 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:06.009 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:06.009 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:06.270 11:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:06.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:06.841 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.102 11:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.363 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:07.363 { 00:33:07.363 "cntlid": 65, 00:33:07.363 "qid": 0, 00:33:07.363 "state": "enabled", 00:33:07.363 "listen_address": { 00:33:07.363 "trtype": "TCP", 00:33:07.363 "adrfam": "IPv4", 00:33:07.363 "traddr": "10.0.0.2", 00:33:07.363 "trsvcid": "4420" 00:33:07.363 }, 00:33:07.363 "peer_address": { 00:33:07.363 "trtype": "TCP", 00:33:07.363 "adrfam": "IPv4", 00:33:07.363 "traddr": "10.0.0.1", 00:33:07.363 "trsvcid": "48454" 00:33:07.363 }, 00:33:07.363 "auth": { 00:33:07.363 "state": "completed", 00:33:07.363 "digest": "sha384", 00:33:07.363 "dhgroup": "ffdhe3072" 00:33:07.363 } 00:33:07.363 } 00:33:07.363 ]' 00:33:07.363 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:07.623 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:07.623 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:07.623 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:07.623 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:07.623 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:07.623 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:07.623 11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:07.623 
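The nvme connect / nvme disconnect pairs that recur throughout this trace exercise the same key material through the kernel NVMe/TCP initiator instead of the SPDK host-side bdev_nvme driver. A minimal sketch of that step, assuming an nvme-cli build with DH-HMAC-CHAP support; the DHHC-1 secret strings below are placeholders, the concrete test values appear verbatim in the surrounding log:

HOST_KEY='DHHC-1:02:<base64 secret>:'   # placeholder; passed as --dhchap-secret (host key)
CTRL_KEY='DHHC-1:01:<base64 secret>:'   # placeholder; passed as --dhchap-ctrl-secret for bidirectional auth

# connect with in-band authentication, then tear the session back down
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
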
11:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:08.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.567 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.827 00:33:09.088 11:41:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:09.088 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:09.088 11:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:09.088 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.088 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:09.088 11:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:09.088 11:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.088 11:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:09.088 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:09.088 { 00:33:09.088 "cntlid": 67, 00:33:09.088 "qid": 0, 00:33:09.088 "state": "enabled", 00:33:09.088 "listen_address": { 00:33:09.088 "trtype": "TCP", 00:33:09.088 "adrfam": "IPv4", 00:33:09.088 "traddr": "10.0.0.2", 00:33:09.088 "trsvcid": "4420" 00:33:09.088 }, 00:33:09.088 "peer_address": { 00:33:09.088 "trtype": "TCP", 00:33:09.088 "adrfam": "IPv4", 00:33:09.088 "traddr": "10.0.0.1", 00:33:09.088 "trsvcid": "58302" 00:33:09.088 }, 00:33:09.088 "auth": { 00:33:09.088 "state": "completed", 00:33:09.088 "digest": "sha384", 00:33:09.088 "dhgroup": "ffdhe3072" 00:33:09.088 } 00:33:09.088 } 00:33:09.088 ]' 00:33:09.088 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:09.349 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:09.349 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:09.349 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:09.349 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:09.349 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:09.349 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:09.349 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:09.610 11:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:33:10.183 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:10.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:10.183 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:10.183 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.183 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.183 
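Each connect_authenticate iteration in this section repeats the same sequence: pin the host-side driver to a single digest/DH-group pair, grant the host NQN access with one key pair, attach, inspect the negotiated auth parameters on a queue pair, and tear everything down again. A condensed sketch of one iteration, assuming the target app answers on rpc.py's default socket, the host-side SPDK app serves /var/tmp/host.sock, and the named keys (key2/ckey2 here) were registered earlier in the run, outside this excerpt:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
SUBNQN=nqn.2024-03.io.spdk:cnode0

# host side: advertise exactly one digest and one DH group for DH-HMAC-CHAP
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# target side: allow the host and bind it to a key pair (the ctrlr key enables bidirectional auth)
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# host side: attach and authenticate in-band with the same keys
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# target side: confirm the qpair negotiated sha384/ffdhe3072 and completed authentication
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth | .digest, .dhgroup, .state'

# clean up before the next digest/DH-group/key combination
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
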
11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.183 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:10.183 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.183 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.443 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.704 00:33:10.704 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:10.704 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:10.704 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:10.965 { 00:33:10.965 "cntlid": 69, 00:33:10.965 "qid": 0, 00:33:10.965 "state": "enabled", 00:33:10.965 "listen_address": { 
00:33:10.965 "trtype": "TCP", 00:33:10.965 "adrfam": "IPv4", 00:33:10.965 "traddr": "10.0.0.2", 00:33:10.965 "trsvcid": "4420" 00:33:10.965 }, 00:33:10.965 "peer_address": { 00:33:10.965 "trtype": "TCP", 00:33:10.965 "adrfam": "IPv4", 00:33:10.965 "traddr": "10.0.0.1", 00:33:10.965 "trsvcid": "58322" 00:33:10.965 }, 00:33:10.965 "auth": { 00:33:10.965 "state": "completed", 00:33:10.965 "digest": "sha384", 00:33:10.965 "dhgroup": "ffdhe3072" 00:33:10.965 } 00:33:10.965 } 00:33:10.965 ]' 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:10.965 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:11.226 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:11.226 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:11.226 11:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:11.226 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:12.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:12.168 11:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:12.168 
11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:12.168 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:12.429 00:33:12.429 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:12.429 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:12.429 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:12.690 { 00:33:12.690 "cntlid": 71, 00:33:12.690 "qid": 0, 00:33:12.690 "state": "enabled", 00:33:12.690 "listen_address": { 00:33:12.690 "trtype": "TCP", 00:33:12.690 "adrfam": "IPv4", 00:33:12.690 "traddr": "10.0.0.2", 00:33:12.690 "trsvcid": "4420" 00:33:12.690 }, 00:33:12.690 "peer_address": { 00:33:12.690 "trtype": "TCP", 00:33:12.690 "adrfam": "IPv4", 00:33:12.690 "traddr": "10.0.0.1", 00:33:12.690 "trsvcid": "58346" 00:33:12.690 }, 00:33:12.690 "auth": { 00:33:12.690 "state": "completed", 00:33:12.690 "digest": "sha384", 00:33:12.690 "dhgroup": "ffdhe3072" 00:33:12.690 } 00:33:12.690 } 00:33:12.690 ]' 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:12.690 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:12.951 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:12.951 11:41:41 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:12.951 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:12.952 11:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:13.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.892 11:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.153 00:33:14.153 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:14.153 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:14.153 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:14.414 { 00:33:14.414 "cntlid": 73, 00:33:14.414 "qid": 0, 00:33:14.414 "state": "enabled", 00:33:14.414 "listen_address": { 00:33:14.414 "trtype": "TCP", 00:33:14.414 "adrfam": "IPv4", 00:33:14.414 "traddr": "10.0.0.2", 00:33:14.414 "trsvcid": "4420" 00:33:14.414 }, 00:33:14.414 "peer_address": { 00:33:14.414 "trtype": "TCP", 00:33:14.414 "adrfam": "IPv4", 00:33:14.414 "traddr": "10.0.0.1", 00:33:14.414 "trsvcid": "58376" 00:33:14.414 }, 00:33:14.414 "auth": { 00:33:14.414 "state": "completed", 00:33:14.414 "digest": "sha384", 00:33:14.414 "dhgroup": "ffdhe4096" 00:33:14.414 } 00:33:14.414 } 00:33:14.414 ]' 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:14.414 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:14.675 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:14.675 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:14.675 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:14.675 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:14.675 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:14.935 11:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:15.506 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:15.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:15.507 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:15.507 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.507 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.507 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.507 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:15.507 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:15.507 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.767 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:16.027 00:33:16.027 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:16.027 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:16.027 11:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:16.288 { 00:33:16.288 "cntlid": 75, 00:33:16.288 "qid": 0, 00:33:16.288 "state": "enabled", 00:33:16.288 "listen_address": { 00:33:16.288 "trtype": "TCP", 00:33:16.288 "adrfam": "IPv4", 00:33:16.288 "traddr": "10.0.0.2", 00:33:16.288 "trsvcid": "4420" 00:33:16.288 }, 00:33:16.288 "peer_address": { 00:33:16.288 "trtype": "TCP", 00:33:16.288 "adrfam": "IPv4", 00:33:16.288 "traddr": "10.0.0.1", 00:33:16.288 "trsvcid": "58400" 00:33:16.288 }, 00:33:16.288 "auth": { 00:33:16.288 "state": "completed", 00:33:16.288 "digest": "sha384", 00:33:16.288 "dhgroup": "ffdhe4096" 00:33:16.288 } 00:33:16.288 } 00:33:16.288 ]' 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:16.288 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:16.548 11:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:33:17.490 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:17.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.491 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:18.062 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:18.062 { 00:33:18.062 "cntlid": 77, 00:33:18.062 "qid": 0, 00:33:18.062 "state": "enabled", 00:33:18.062 "listen_address": { 00:33:18.062 "trtype": "TCP", 00:33:18.062 "adrfam": "IPv4", 00:33:18.062 "traddr": "10.0.0.2", 00:33:18.062 "trsvcid": "4420" 00:33:18.062 }, 00:33:18.062 "peer_address": { 00:33:18.062 "trtype": "TCP", 00:33:18.062 "adrfam": "IPv4", 00:33:18.062 "traddr": "10.0.0.1", 00:33:18.062 "trsvcid": "58432" 00:33:18.062 }, 00:33:18.062 "auth": { 00:33:18.062 "state": "completed", 00:33:18.062 "digest": "sha384", 00:33:18.062 "dhgroup": "ffdhe4096" 00:33:18.062 } 00:33:18.062 } 00:33:18.062 ]' 00:33:18.062 11:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:18.062 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:18.062 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:33:18.323 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:18.323 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:18.323 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:18.323 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:18.323 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:18.583 11:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:19.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:19.153 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:19.414 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:19.675 00:33:19.675 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:19.675 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:19.675 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:19.936 { 00:33:19.936 "cntlid": 79, 00:33:19.936 "qid": 0, 00:33:19.936 "state": "enabled", 00:33:19.936 "listen_address": { 00:33:19.936 "trtype": "TCP", 00:33:19.936 "adrfam": "IPv4", 00:33:19.936 "traddr": "10.0.0.2", 00:33:19.936 "trsvcid": "4420" 00:33:19.936 }, 00:33:19.936 "peer_address": { 00:33:19.936 "trtype": "TCP", 00:33:19.936 "adrfam": "IPv4", 00:33:19.936 "traddr": "10.0.0.1", 00:33:19.936 "trsvcid": "47558" 00:33:19.936 }, 00:33:19.936 "auth": { 00:33:19.936 "state": "completed", 00:33:19.936 "digest": "sha384", 00:33:19.936 "dhgroup": "ffdhe4096" 00:33:19.936 } 00:33:19.936 } 00:33:19.936 ]' 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:19.936 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:20.197 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:20.197 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:20.197 11:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:20.197 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:21.139 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:21.139 11:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.139 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.710 00:33:21.710 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:21.710 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:21.710 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:21.971 { 00:33:21.971 "cntlid": 81, 00:33:21.971 "qid": 0, 00:33:21.971 "state": "enabled", 00:33:21.971 "listen_address": { 00:33:21.971 "trtype": "TCP", 00:33:21.971 "adrfam": "IPv4", 00:33:21.971 "traddr": "10.0.0.2", 00:33:21.971 "trsvcid": "4420" 00:33:21.971 }, 00:33:21.971 "peer_address": { 00:33:21.971 "trtype": "TCP", 00:33:21.971 "adrfam": "IPv4", 00:33:21.971 "traddr": "10.0.0.1", 00:33:21.971 "trsvcid": "47584" 00:33:21.971 }, 00:33:21.971 "auth": { 00:33:21.971 "state": "completed", 00:33:21.971 "digest": "sha384", 00:33:21.971 "dhgroup": "ffdhe6144" 00:33:21.971 } 00:33:21.971 } 00:33:21.971 ]' 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:21.971 11:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:22.232 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:22.802 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:22.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:22.802 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:22.802 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.802 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.063 11:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.636 00:33:23.636 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:23.636 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:23.636 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:23.896 { 00:33:23.896 "cntlid": 83, 00:33:23.896 "qid": 0, 00:33:23.896 "state": "enabled", 00:33:23.896 "listen_address": { 00:33:23.896 "trtype": "TCP", 00:33:23.896 "adrfam": "IPv4", 00:33:23.896 "traddr": "10.0.0.2", 00:33:23.896 "trsvcid": "4420" 00:33:23.896 }, 00:33:23.896 "peer_address": { 00:33:23.896 "trtype": "TCP", 00:33:23.896 "adrfam": "IPv4", 00:33:23.896 "traddr": "10.0.0.1", 00:33:23.896 "trsvcid": "47624" 00:33:23.896 }, 00:33:23.896 "auth": { 00:33:23.896 "state": "completed", 00:33:23.896 "digest": "sha384", 00:33:23.896 
"dhgroup": "ffdhe6144" 00:33:23.896 } 00:33:23.896 } 00:33:23.896 ]' 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:23.896 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:23.897 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:23.897 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:23.897 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:23.897 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:23.897 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:24.157 11:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:33:24.727 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:24.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:24.727 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:24.727 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:24.727 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:24.987 11:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.557 00:33:25.557 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:25.557 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:25.557 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:25.818 { 00:33:25.818 "cntlid": 85, 00:33:25.818 "qid": 0, 00:33:25.818 "state": "enabled", 00:33:25.818 "listen_address": { 00:33:25.818 "trtype": "TCP", 00:33:25.818 "adrfam": "IPv4", 00:33:25.818 "traddr": "10.0.0.2", 00:33:25.818 "trsvcid": "4420" 00:33:25.818 }, 00:33:25.818 "peer_address": { 00:33:25.818 "trtype": "TCP", 00:33:25.818 "adrfam": "IPv4", 00:33:25.818 "traddr": "10.0.0.1", 00:33:25.818 "trsvcid": "47646" 00:33:25.818 }, 00:33:25.818 "auth": { 00:33:25.818 "state": "completed", 00:33:25.818 "digest": "sha384", 00:33:25.818 "dhgroup": "ffdhe6144" 00:33:25.818 } 00:33:25.818 } 00:33:25.818 ]' 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:25.818 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:26.079 11:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:26.649 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:26.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:26.649 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:26.649 11:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:26.649 11:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:26.910 11:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:27.511 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:27.511 11:41:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:27.511 { 00:33:27.511 "cntlid": 87, 00:33:27.511 "qid": 0, 00:33:27.511 "state": "enabled", 00:33:27.511 "listen_address": { 00:33:27.511 "trtype": "TCP", 00:33:27.511 "adrfam": "IPv4", 00:33:27.511 "traddr": "10.0.0.2", 00:33:27.511 "trsvcid": "4420" 00:33:27.511 }, 00:33:27.511 "peer_address": { 00:33:27.511 "trtype": "TCP", 00:33:27.511 "adrfam": "IPv4", 00:33:27.511 "traddr": "10.0.0.1", 00:33:27.511 "trsvcid": "47666" 00:33:27.511 }, 00:33:27.511 "auth": { 00:33:27.511 "state": "completed", 00:33:27.511 "digest": "sha384", 00:33:27.511 "dhgroup": "ffdhe6144" 00:33:27.511 } 00:33:27.511 } 00:33:27.511 ]' 00:33:27.511 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:27.807 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:27.807 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:27.807 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:27.808 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:27.808 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:27.808 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:27.808 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:28.068 11:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:28.639 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:28.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:28.639 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:28.639 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.639 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.639 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.639 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:28.639 11:41:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:28.640 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:28.640 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:28.901 11:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.472 00:33:29.472 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:29.472 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:29.472 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:29.733 { 00:33:29.733 "cntlid": 89, 00:33:29.733 "qid": 0, 00:33:29.733 "state": "enabled", 00:33:29.733 "listen_address": { 00:33:29.733 "trtype": "TCP", 00:33:29.733 "adrfam": "IPv4", 00:33:29.733 "traddr": "10.0.0.2", 00:33:29.733 
"trsvcid": "4420" 00:33:29.733 }, 00:33:29.733 "peer_address": { 00:33:29.733 "trtype": "TCP", 00:33:29.733 "adrfam": "IPv4", 00:33:29.733 "traddr": "10.0.0.1", 00:33:29.733 "trsvcid": "42318" 00:33:29.733 }, 00:33:29.733 "auth": { 00:33:29.733 "state": "completed", 00:33:29.733 "digest": "sha384", 00:33:29.733 "dhgroup": "ffdhe8192" 00:33:29.733 } 00:33:29.733 } 00:33:29.733 ]' 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:29.733 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:29.993 11:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:30.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:30.935 11:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.505 00:33:31.505 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:31.505 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:31.505 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:31.765 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:31.765 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:31.765 11:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.765 11:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.765 11:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.765 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:31.765 { 00:33:31.765 "cntlid": 91, 00:33:31.765 "qid": 0, 00:33:31.765 "state": "enabled", 00:33:31.765 "listen_address": { 00:33:31.765 "trtype": "TCP", 00:33:31.765 "adrfam": "IPv4", 00:33:31.765 "traddr": "10.0.0.2", 00:33:31.765 "trsvcid": "4420" 00:33:31.765 }, 00:33:31.765 "peer_address": { 00:33:31.765 "trtype": "TCP", 00:33:31.765 "adrfam": "IPv4", 00:33:31.765 "traddr": "10.0.0.1", 00:33:31.765 "trsvcid": "42340" 00:33:31.765 }, 00:33:31.765 "auth": { 00:33:31.765 "state": "completed", 00:33:31.765 "digest": "sha384", 00:33:31.765 "dhgroup": "ffdhe8192" 00:33:31.765 } 00:33:31.765 } 00:33:31.766 ]' 00:33:31.766 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:31.766 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:31.766 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:32.026 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:32.026 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:32.026 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:32.026 11:42:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:32.026 11:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:32.286 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:32.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:32.857 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.118 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.119 11:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.689 00:33:33.689 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:33.689 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:33.689 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:33.950 { 00:33:33.950 "cntlid": 93, 00:33:33.950 "qid": 0, 00:33:33.950 "state": "enabled", 00:33:33.950 "listen_address": { 00:33:33.950 "trtype": "TCP", 00:33:33.950 "adrfam": "IPv4", 00:33:33.950 "traddr": "10.0.0.2", 00:33:33.950 "trsvcid": "4420" 00:33:33.950 }, 00:33:33.950 "peer_address": { 00:33:33.950 "trtype": "TCP", 00:33:33.950 "adrfam": "IPv4", 00:33:33.950 "traddr": "10.0.0.1", 00:33:33.950 "trsvcid": "42366" 00:33:33.950 }, 00:33:33.950 "auth": { 00:33:33.950 "state": "completed", 00:33:33.950 "digest": "sha384", 00:33:33.950 "dhgroup": "ffdhe8192" 00:33:33.950 } 00:33:33.950 } 00:33:33.950 ]' 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:33.950 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:34.211 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:34.211 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:34.211 11:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:34.211 11:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:35.154 11:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:35.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:35.154 11:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:35.154 11:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:35.154 11:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.154 11:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:35.154 11:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:35.154 11:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:35.155 11:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:35.155 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:36.096 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:36.096 11:42:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:36.096 { 00:33:36.096 "cntlid": 95, 00:33:36.096 "qid": 0, 00:33:36.096 "state": "enabled", 00:33:36.096 "listen_address": { 00:33:36.096 "trtype": "TCP", 00:33:36.096 "adrfam": "IPv4", 00:33:36.096 "traddr": "10.0.0.2", 00:33:36.096 "trsvcid": "4420" 00:33:36.096 }, 00:33:36.096 "peer_address": { 00:33:36.096 "trtype": "TCP", 00:33:36.096 "adrfam": "IPv4", 00:33:36.096 "traddr": "10.0.0.1", 00:33:36.096 "trsvcid": "42392" 00:33:36.096 }, 00:33:36.096 "auth": { 00:33:36.096 "state": "completed", 00:33:36.096 "digest": "sha384", 00:33:36.096 "dhgroup": "ffdhe8192" 00:33:36.096 } 00:33:36.096 } 00:33:36.096 ]' 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:36.096 11:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:36.096 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:36.096 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:36.357 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:36.357 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:36.357 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:36.357 11:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:37.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.300 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.561 00:33:37.561 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:37.561 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:37.561 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:37.822 { 00:33:37.822 "cntlid": 97, 00:33:37.822 "qid": 0, 00:33:37.822 "state": "enabled", 00:33:37.822 "listen_address": { 00:33:37.822 "trtype": "TCP", 00:33:37.822 "adrfam": "IPv4", 00:33:37.822 "traddr": "10.0.0.2", 00:33:37.822 "trsvcid": "4420" 00:33:37.822 }, 00:33:37.822 "peer_address": { 00:33:37.822 "trtype": "TCP", 00:33:37.822 "adrfam": "IPv4", 00:33:37.822 "traddr": "10.0.0.1", 00:33:37.822 "trsvcid": "42426" 00:33:37.822 }, 00:33:37.822 "auth": { 00:33:37.822 "state": "completed", 00:33:37.822 "digest": "sha512", 00:33:37.822 "dhgroup": "null" 00:33:37.822 } 00:33:37.822 } 00:33:37.822 ]' 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:37.822 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:38.083 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:38.083 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:38.083 11:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:38.083 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:39.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.026 11:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.287 00:33:39.287 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:39.287 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:39.287 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:39.548 { 00:33:39.548 "cntlid": 99, 00:33:39.548 "qid": 0, 00:33:39.548 "state": "enabled", 00:33:39.548 "listen_address": { 00:33:39.548 "trtype": "TCP", 00:33:39.548 "adrfam": "IPv4", 00:33:39.548 "traddr": "10.0.0.2", 00:33:39.548 "trsvcid": "4420" 00:33:39.548 }, 00:33:39.548 "peer_address": { 00:33:39.548 "trtype": "TCP", 00:33:39.548 "adrfam": "IPv4", 00:33:39.548 "traddr": "10.0.0.1", 00:33:39.548 "trsvcid": "33064" 00:33:39.548 }, 00:33:39.548 "auth": { 00:33:39.548 "state": "completed", 00:33:39.548 "digest": "sha512", 00:33:39.548 "dhgroup": "null" 00:33:39.548 } 00:33:39.548 } 00:33:39.548 ]' 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:39.548 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:39.809 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:39.809 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:39.809 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:39.809 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:39.809 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:39.809 11:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 
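
The two entries above close one DH-HMAC-CHAP iteration from the host kernel's side: after the SPDK bdev_nvme controller has been detached, target/auth.sh@52 reconnects to the same subsystem with nvme-cli, passing the host and controller secrets on the command line, and target/auth.sh@55 disconnects again before the host entry is removed for the next digest/dhgroup/key combination. A minimal sketch of that leg follows; the address, NQNs, host ID and DHHC-1 secret strings are placeholders following the pattern in this log, not values to reuse.

  # Kernel-initiator (nvme-cli) leg of one auth iteration - sketch only.
  # <...> fields are placeholders; the real secrets are generated per test run.
  TRADDR=10.0.0.2
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
  HOST_SECRET='DHHC-1:01:<base64 host key>:'   # passed as --dhchap-secret
  CTRL_SECRET='DHHC-1:02:<base64 ctrl key>:'   # passed as --dhchap-ctrl-secret (bidirectional auth)
  nvme connect -t tcp -a "$TRADDR" -n "$SUBNQN" -i 1 \
      -q "$HOSTNQN" --hostid "<host-uuid>" \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n "$SUBNQN"   # expected output: "... disconnected 1 controller(s)"
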
00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:40.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:40.751 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.012 00:33:41.012 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:41.012 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:41.012 11:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:41.272 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.272 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:41.272 11:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:41.272 11:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.272 11:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:41.272 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:41.272 { 00:33:41.272 "cntlid": 101, 00:33:41.272 "qid": 0, 00:33:41.272 "state": "enabled", 00:33:41.273 "listen_address": { 00:33:41.273 "trtype": "TCP", 00:33:41.273 "adrfam": "IPv4", 00:33:41.273 "traddr": "10.0.0.2", 00:33:41.273 "trsvcid": "4420" 00:33:41.273 }, 00:33:41.273 "peer_address": { 00:33:41.273 "trtype": "TCP", 00:33:41.273 "adrfam": "IPv4", 00:33:41.273 "traddr": "10.0.0.1", 00:33:41.273 "trsvcid": "33088" 00:33:41.273 }, 00:33:41.273 "auth": { 00:33:41.273 "state": "completed", 00:33:41.273 "digest": "sha512", 00:33:41.273 "dhgroup": "null" 00:33:41.273 } 00:33:41.273 } 00:33:41.273 ]' 00:33:41.273 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:41.273 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:41.273 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:41.534 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:41.534 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:41.534 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:41.534 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:41.534 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:41.795 11:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:42.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:42.366 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:42.627 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:42.889 00:33:42.889 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:42.889 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:42.889 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:43.150 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:43.150 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:43.150 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:43.150 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:43.150 11:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:43.150 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:43.150 { 00:33:43.150 "cntlid": 103, 00:33:43.150 "qid": 0, 00:33:43.150 "state": "enabled", 00:33:43.150 "listen_address": { 00:33:43.150 "trtype": "TCP", 00:33:43.150 "adrfam": "IPv4", 00:33:43.150 "traddr": "10.0.0.2", 00:33:43.150 "trsvcid": "4420" 00:33:43.150 }, 00:33:43.150 "peer_address": { 00:33:43.150 "trtype": "TCP", 00:33:43.150 "adrfam": "IPv4", 00:33:43.150 "traddr": "10.0.0.1", 00:33:43.150 "trsvcid": "33108" 00:33:43.150 }, 00:33:43.150 "auth": { 00:33:43.150 "state": "completed", 00:33:43.150 "digest": "sha512", 00:33:43.150 "dhgroup": "null" 00:33:43.150 } 00:33:43.150 } 00:33:43.150 ]' 00:33:43.150 11:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:43.150 11:42:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:43.150 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:43.150 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:33:43.150 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:43.150 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:43.150 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:43.150 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:43.410 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:43.984 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:44.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:44.248 11:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.248 11:42:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.248 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:44.509 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:44.770 { 00:33:44.770 "cntlid": 105, 00:33:44.770 "qid": 0, 00:33:44.770 "state": "enabled", 00:33:44.770 "listen_address": { 00:33:44.770 "trtype": "TCP", 00:33:44.770 "adrfam": "IPv4", 00:33:44.770 "traddr": "10.0.0.2", 00:33:44.770 "trsvcid": "4420" 00:33:44.770 }, 00:33:44.770 "peer_address": { 00:33:44.770 "trtype": "TCP", 00:33:44.770 "adrfam": "IPv4", 00:33:44.770 "traddr": "10.0.0.1", 00:33:44.770 "trsvcid": "33120" 00:33:44.770 }, 00:33:44.770 "auth": { 00:33:44.770 "state": "completed", 00:33:44.770 "digest": "sha512", 00:33:44.770 "dhgroup": "ffdhe2048" 00:33:44.770 } 00:33:44.770 } 00:33:44.770 ]' 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:44.770 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:45.032 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:45.032 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:45.032 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:45.032 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:45.032 11:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:45.292 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 
80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:45.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:45.865 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.127 11:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:46.389 00:33:46.389 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:46.389 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:46.389 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:46.650 { 00:33:46.650 "cntlid": 107, 00:33:46.650 "qid": 0, 00:33:46.650 "state": "enabled", 00:33:46.650 "listen_address": { 00:33:46.650 "trtype": "TCP", 00:33:46.650 "adrfam": "IPv4", 00:33:46.650 "traddr": "10.0.0.2", 00:33:46.650 "trsvcid": "4420" 00:33:46.650 }, 00:33:46.650 "peer_address": { 00:33:46.650 "trtype": "TCP", 00:33:46.650 "adrfam": "IPv4", 00:33:46.650 "traddr": "10.0.0.1", 00:33:46.650 "trsvcid": "33156" 00:33:46.650 }, 00:33:46.650 "auth": { 00:33:46.650 "state": "completed", 00:33:46.650 "digest": "sha512", 00:33:46.650 "dhgroup": "ffdhe2048" 00:33:46.650 } 00:33:46.650 } 00:33:46.650 ]' 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:46.650 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:46.911 11:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:47.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:47.854 11:42:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.854 11:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:47.855 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:47.855 11:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:48.115 00:33:48.115 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:48.115 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:48.115 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:48.375 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:48.376 { 00:33:48.376 "cntlid": 109, 00:33:48.376 "qid": 0, 00:33:48.376 "state": "enabled", 00:33:48.376 "listen_address": { 00:33:48.376 "trtype": "TCP", 00:33:48.376 "adrfam": "IPv4", 00:33:48.376 "traddr": "10.0.0.2", 00:33:48.376 "trsvcid": "4420" 00:33:48.376 }, 00:33:48.376 "peer_address": { 00:33:48.376 "trtype": "TCP", 00:33:48.376 
"adrfam": "IPv4", 00:33:48.376 "traddr": "10.0.0.1", 00:33:48.376 "trsvcid": "33186" 00:33:48.376 }, 00:33:48.376 "auth": { 00:33:48.376 "state": "completed", 00:33:48.376 "digest": "sha512", 00:33:48.376 "dhgroup": "ffdhe2048" 00:33:48.376 } 00:33:48.376 } 00:33:48.376 ]' 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:48.376 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:48.637 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:48.637 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:48.637 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:48.637 11:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:49.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:49.580 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:49.841 00:33:49.841 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:49.841 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:49.841 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:50.101 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:50.101 11:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:50.101 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:50.101 11:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.101 11:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:50.101 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:50.101 { 00:33:50.101 "cntlid": 111, 00:33:50.101 "qid": 0, 00:33:50.101 "state": "enabled", 00:33:50.101 "listen_address": { 00:33:50.101 "trtype": "TCP", 00:33:50.101 "adrfam": "IPv4", 00:33:50.101 "traddr": "10.0.0.2", 00:33:50.101 "trsvcid": "4420" 00:33:50.101 }, 00:33:50.101 "peer_address": { 00:33:50.101 "trtype": "TCP", 00:33:50.101 "adrfam": "IPv4", 00:33:50.101 "traddr": "10.0.0.1", 00:33:50.101 "trsvcid": "36132" 00:33:50.101 }, 00:33:50.101 "auth": { 00:33:50.101 "state": "completed", 00:33:50.101 "digest": "sha512", 00:33:50.101 "dhgroup": "ffdhe2048" 00:33:50.101 } 00:33:50.101 } 00:33:50.101 ]' 00:33:50.101 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:50.101 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:50.101 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:50.361 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:50.361 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:50.361 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:50.361 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:50.361 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:50.621 11:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:51.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:51.192 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.453 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.454 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:51.454 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
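
Each block like the one above is one pass through connect_authenticate() on the SPDK side: bdev_nvme_set_options pins the host to the digest/dhgroup under test, nvmf_subsystem_add_host authorizes the host NQN on the target with the matching key pair, bdev_nvme_attach_controller performs the authenticated connect, and the qpair is then read back so its auth.digest, auth.dhgroup and auth.state fields can be checked with jq before the controller is detached. A condensed sketch of one iteration, using the same RPC calls and jq filters that appear in this log (socket path, key names and NQNs are simply the values this run happens to use; the key names are assumed to have been registered earlier in the script):

  # One connect_authenticate iteration (sha512 / ffdhe3072 / key0), condensed sketch.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock                      # host (initiator) SPDK app socket
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
  digest=sha512 dhgroup=ffdhe3072 key=key0 ckey=ckey0

  # Host side: only offer the digest/dhgroup under test.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup
  # Target side: authorize the host NQN with the DH-HMAC-CHAP key pair.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key $key --dhchap-ctrlr-key $ckey
  # Host side: attach a controller; authentication happens during this connect.
  $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn --dhchap-key $key --dhchap-ctrlr-key $ckey
  # Target side: confirm the qpair negotiated what was requested.
  qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  # Host side: tear down before the next digest/dhgroup/key combination.
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
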
00:33:51.714 00:33:51.714 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:51.714 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:51.714 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:52.017 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:52.017 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:52.017 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.017 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:52.017 11:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.017 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:52.017 { 00:33:52.017 "cntlid": 113, 00:33:52.017 "qid": 0, 00:33:52.017 "state": "enabled", 00:33:52.017 "listen_address": { 00:33:52.017 "trtype": "TCP", 00:33:52.017 "adrfam": "IPv4", 00:33:52.017 "traddr": "10.0.0.2", 00:33:52.017 "trsvcid": "4420" 00:33:52.017 }, 00:33:52.017 "peer_address": { 00:33:52.018 "trtype": "TCP", 00:33:52.018 "adrfam": "IPv4", 00:33:52.018 "traddr": "10.0.0.1", 00:33:52.018 "trsvcid": "36154" 00:33:52.018 }, 00:33:52.018 "auth": { 00:33:52.018 "state": "completed", 00:33:52.018 "digest": "sha512", 00:33:52.018 "dhgroup": "ffdhe3072" 00:33:52.018 } 00:33:52.018 } 00:33:52.018 ]' 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:52.018 11:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:52.300 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:33:52.871 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:52.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:52.871 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:52.871 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:33:52.871 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.132 11:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.132 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:53.132 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:53.132 11:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:53.132 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:53.391 00:33:53.391 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:53.391 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:53.391 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:53.651 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.651 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:53.651 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.651 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.651 11:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.651 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:53.651 { 00:33:53.651 
"cntlid": 115, 00:33:53.651 "qid": 0, 00:33:53.651 "state": "enabled", 00:33:53.651 "listen_address": { 00:33:53.651 "trtype": "TCP", 00:33:53.651 "adrfam": "IPv4", 00:33:53.651 "traddr": "10.0.0.2", 00:33:53.651 "trsvcid": "4420" 00:33:53.651 }, 00:33:53.651 "peer_address": { 00:33:53.651 "trtype": "TCP", 00:33:53.651 "adrfam": "IPv4", 00:33:53.651 "traddr": "10.0.0.1", 00:33:53.651 "trsvcid": "36182" 00:33:53.651 }, 00:33:53.651 "auth": { 00:33:53.651 "state": "completed", 00:33:53.651 "digest": "sha512", 00:33:53.651 "dhgroup": "ffdhe3072" 00:33:53.651 } 00:33:53.651 } 00:33:53.651 ]' 00:33:53.651 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:53.911 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:53.911 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:53.911 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:53.911 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:53.911 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:53.911 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:53.911 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:54.171 11:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:54.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:54.741 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:55.001 11:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:55.262 00:33:55.262 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:55.262 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:55.262 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:55.522 { 00:33:55.522 "cntlid": 117, 00:33:55.522 "qid": 0, 00:33:55.522 "state": "enabled", 00:33:55.522 "listen_address": { 00:33:55.522 "trtype": "TCP", 00:33:55.522 "adrfam": "IPv4", 00:33:55.522 "traddr": "10.0.0.2", 00:33:55.522 "trsvcid": "4420" 00:33:55.522 }, 00:33:55.522 "peer_address": { 00:33:55.522 "trtype": "TCP", 00:33:55.522 "adrfam": "IPv4", 00:33:55.522 "traddr": "10.0.0.1", 00:33:55.522 "trsvcid": "36210" 00:33:55.522 }, 00:33:55.522 "auth": { 00:33:55.522 "state": "completed", 00:33:55.522 "digest": "sha512", 00:33:55.522 "dhgroup": "ffdhe3072" 00:33:55.522 } 00:33:55.522 } 00:33:55.522 ]' 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:55.522 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:33:55.784 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:55.784 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:55.784 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:55.784 11:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:56.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:56.728 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:33:56.988 00:33:56.988 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:56.988 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:56.988 11:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:57.249 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.249 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:57.249 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:57.249 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.249 11:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:57.249 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:57.249 { 00:33:57.249 "cntlid": 119, 00:33:57.249 "qid": 0, 00:33:57.249 "state": "enabled", 00:33:57.249 "listen_address": { 00:33:57.249 "trtype": "TCP", 00:33:57.249 "adrfam": "IPv4", 00:33:57.249 "traddr": "10.0.0.2", 00:33:57.249 "trsvcid": "4420" 00:33:57.249 }, 00:33:57.249 "peer_address": { 00:33:57.249 "trtype": "TCP", 00:33:57.249 "adrfam": "IPv4", 00:33:57.249 "traddr": "10.0.0.1", 00:33:57.249 "trsvcid": "36234" 00:33:57.249 }, 00:33:57.249 "auth": { 00:33:57.249 "state": "completed", 00:33:57.249 "digest": "sha512", 00:33:57.249 "dhgroup": "ffdhe3072" 00:33:57.249 } 00:33:57.249 } 00:33:57.249 ]' 00:33:57.249 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:57.510 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:57.510 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:57.510 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:57.510 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:57.510 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:57.510 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:57.510 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:57.771 11:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:58.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:58.343 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.605 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:58.865 00:33:58.865 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:33:58.865 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:58.865 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:59.126 11:42:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:33:59.126 { 00:33:59.126 "cntlid": 121, 00:33:59.126 "qid": 0, 00:33:59.126 "state": "enabled", 00:33:59.126 "listen_address": { 00:33:59.126 "trtype": "TCP", 00:33:59.126 "adrfam": "IPv4", 00:33:59.126 "traddr": "10.0.0.2", 00:33:59.126 "trsvcid": "4420" 00:33:59.126 }, 00:33:59.126 "peer_address": { 00:33:59.126 "trtype": "TCP", 00:33:59.126 "adrfam": "IPv4", 00:33:59.126 "traddr": "10.0.0.1", 00:33:59.126 "trsvcid": "37502" 00:33:59.126 }, 00:33:59.126 "auth": { 00:33:59.126 "state": "completed", 00:33:59.126 "digest": "sha512", 00:33:59.126 "dhgroup": "ffdhe4096" 00:33:59.126 } 00:33:59.126 } 00:33:59.126 ]' 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:59.126 11:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:33:59.126 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:59.126 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:33:59.126 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:59.126 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:59.126 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:59.387 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:00.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:00.331 11:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.331 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:00.592 00:34:00.592 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:00.592 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:00.592 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:00.853 { 00:34:00.853 "cntlid": 123, 00:34:00.853 "qid": 0, 00:34:00.853 "state": "enabled", 00:34:00.853 "listen_address": { 00:34:00.853 "trtype": "TCP", 00:34:00.853 "adrfam": "IPv4", 00:34:00.853 "traddr": "10.0.0.2", 00:34:00.853 "trsvcid": "4420" 00:34:00.853 }, 00:34:00.853 "peer_address": { 00:34:00.853 "trtype": "TCP", 00:34:00.853 "adrfam": "IPv4", 00:34:00.853 "traddr": "10.0.0.1", 00:34:00.853 "trsvcid": "37512" 00:34:00.853 }, 00:34:00.853 "auth": { 00:34:00.853 "state": "completed", 00:34:00.853 "digest": "sha512", 00:34:00.853 "dhgroup": "ffdhe4096" 00:34:00.853 } 00:34:00.853 } 00:34:00.853 ]' 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:00.853 11:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:01.114 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:34:02.056 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:02.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:02.056 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:02.056 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.056 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.057 
11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.057 11:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.318 00:34:02.318 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:02.318 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:02.318 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:02.580 { 00:34:02.580 "cntlid": 125, 00:34:02.580 "qid": 0, 00:34:02.580 "state": "enabled", 00:34:02.580 "listen_address": { 00:34:02.580 "trtype": "TCP", 00:34:02.580 "adrfam": "IPv4", 00:34:02.580 "traddr": "10.0.0.2", 00:34:02.580 "trsvcid": "4420" 00:34:02.580 }, 00:34:02.580 "peer_address": { 00:34:02.580 "trtype": "TCP", 00:34:02.580 "adrfam": "IPv4", 00:34:02.580 "traddr": "10.0.0.1", 00:34:02.580 "trsvcid": "37544" 00:34:02.580 }, 00:34:02.580 "auth": { 00:34:02.580 "state": "completed", 00:34:02.580 "digest": "sha512", 00:34:02.580 "dhgroup": "ffdhe4096" 00:34:02.580 } 00:34:02.580 } 00:34:02.580 ]' 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:34:02.580 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:02.841 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:02.841 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:02.841 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:02.841 11:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret 
DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:03.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.783 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:03.784 11:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:04.044 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:04.305 { 00:34:04.305 "cntlid": 127, 00:34:04.305 "qid": 0, 00:34:04.305 "state": "enabled", 00:34:04.305 "listen_address": { 00:34:04.305 "trtype": "TCP", 00:34:04.305 "adrfam": "IPv4", 00:34:04.305 "traddr": "10.0.0.2", 00:34:04.305 "trsvcid": "4420" 00:34:04.305 }, 00:34:04.305 "peer_address": { 00:34:04.305 "trtype": "TCP", 00:34:04.305 "adrfam": "IPv4", 00:34:04.305 "traddr": "10.0.0.1", 00:34:04.305 "trsvcid": "37562" 00:34:04.305 }, 00:34:04.305 "auth": { 00:34:04.305 "state": "completed", 00:34:04.305 "digest": "sha512", 00:34:04.305 "dhgroup": "ffdhe4096" 00:34:04.305 } 00:34:04.305 } 00:34:04.305 ]' 00:34:04.305 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:04.567 11:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:05.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.508 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.080 00:34:06.080 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:06.080 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:06.080 11:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:06.080 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.080 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:06.080 11:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:06.080 11:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.080 11:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:06.080 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:06.080 { 00:34:06.080 "cntlid": 129, 00:34:06.080 "qid": 0, 00:34:06.080 "state": "enabled", 00:34:06.080 "listen_address": { 00:34:06.080 "trtype": "TCP", 00:34:06.080 "adrfam": "IPv4", 00:34:06.080 "traddr": "10.0.0.2", 00:34:06.080 "trsvcid": "4420" 00:34:06.080 }, 00:34:06.080 "peer_address": { 00:34:06.080 "trtype": "TCP", 00:34:06.080 "adrfam": "IPv4", 00:34:06.080 "traddr": "10.0.0.1", 00:34:06.080 "trsvcid": "37586" 00:34:06.080 }, 00:34:06.080 "auth": { 
00:34:06.080 "state": "completed", 00:34:06.080 "digest": "sha512", 00:34:06.080 "dhgroup": "ffdhe6144" 00:34:06.080 } 00:34:06.080 } 00:34:06.080 ]' 00:34:06.080 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:06.341 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:06.341 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:06.341 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:06.341 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:06.341 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:06.341 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:06.341 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:06.603 11:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:07.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:07.176 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.437 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.698 00:34:07.698 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:07.698 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:07.698 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:07.958 { 00:34:07.958 "cntlid": 131, 00:34:07.958 "qid": 0, 00:34:07.958 "state": "enabled", 00:34:07.958 "listen_address": { 00:34:07.958 "trtype": "TCP", 00:34:07.958 "adrfam": "IPv4", 00:34:07.958 "traddr": "10.0.0.2", 00:34:07.958 "trsvcid": "4420" 00:34:07.958 }, 00:34:07.958 "peer_address": { 00:34:07.958 "trtype": "TCP", 00:34:07.958 "adrfam": "IPv4", 00:34:07.958 "traddr": "10.0.0.1", 00:34:07.958 "trsvcid": "37628" 00:34:07.958 }, 00:34:07.958 "auth": { 00:34:07.958 "state": "completed", 00:34:07.958 "digest": "sha512", 00:34:07.958 "dhgroup": "ffdhe6144" 00:34:07.958 } 00:34:07.958 } 00:34:07.958 ]' 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:07.958 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:08.218 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:08.218 11:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:08.218 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:08.218 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:08.218 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:08.478 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:34:09.047 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:09.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:09.048 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:09.048 11:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.048 11:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.048 11:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.048 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:09.048 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:09.048 11:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.308 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:34:09.878 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:09.878 { 00:34:09.878 "cntlid": 133, 00:34:09.878 "qid": 0, 00:34:09.878 "state": "enabled", 00:34:09.878 "listen_address": { 00:34:09.878 "trtype": "TCP", 00:34:09.878 "adrfam": "IPv4", 00:34:09.878 "traddr": "10.0.0.2", 00:34:09.878 "trsvcid": "4420" 00:34:09.878 }, 00:34:09.878 "peer_address": { 00:34:09.878 "trtype": "TCP", 00:34:09.878 "adrfam": "IPv4", 00:34:09.878 "traddr": "10.0.0.1", 00:34:09.878 "trsvcid": "35584" 00:34:09.878 }, 00:34:09.878 "auth": { 00:34:09.878 "state": "completed", 00:34:09.878 "digest": "sha512", 00:34:09.878 "dhgroup": "ffdhe6144" 00:34:09.878 } 00:34:09.878 } 00:34:09.878 ]' 00:34:09.878 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:10.138 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:10.138 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:10.138 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:10.138 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:10.138 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:10.138 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:10.138 11:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:10.398 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:34:10.968 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:10.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:10.968 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:10.968 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.968 11:42:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.968 11:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.968 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:10.968 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:10.968 11:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:11.229 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:11.797 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.797 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:11.797 { 00:34:11.797 "cntlid": 135, 00:34:11.797 "qid": 0, 00:34:11.797 "state": "enabled", 00:34:11.797 "listen_address": { 
00:34:11.797 "trtype": "TCP", 00:34:11.797 "adrfam": "IPv4", 00:34:11.797 "traddr": "10.0.0.2", 00:34:11.797 "trsvcid": "4420" 00:34:11.797 }, 00:34:11.797 "peer_address": { 00:34:11.797 "trtype": "TCP", 00:34:11.797 "adrfam": "IPv4", 00:34:11.797 "traddr": "10.0.0.1", 00:34:11.798 "trsvcid": "35620" 00:34:11.798 }, 00:34:11.798 "auth": { 00:34:11.798 "state": "completed", 00:34:11.798 "digest": "sha512", 00:34:11.798 "dhgroup": "ffdhe6144" 00:34:11.798 } 00:34:11.798 } 00:34:11.798 ]' 00:34:11.798 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:12.059 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:12.059 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:12.059 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:34:12.059 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:12.059 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:12.059 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:12.059 11:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:12.319 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:12.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:12.890 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.151 11:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.723 00:34:13.723 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:13.723 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:13.723 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:13.983 { 00:34:13.983 "cntlid": 137, 00:34:13.983 "qid": 0, 00:34:13.983 "state": "enabled", 00:34:13.983 "listen_address": { 00:34:13.983 "trtype": "TCP", 00:34:13.983 "adrfam": "IPv4", 00:34:13.983 "traddr": "10.0.0.2", 00:34:13.983 "trsvcid": "4420" 00:34:13.983 }, 00:34:13.983 "peer_address": { 00:34:13.983 "trtype": "TCP", 00:34:13.983 "adrfam": "IPv4", 00:34:13.983 "traddr": "10.0.0.1", 00:34:13.983 "trsvcid": "35654" 00:34:13.983 }, 00:34:13.983 "auth": { 00:34:13.983 "state": "completed", 00:34:13.983 "digest": "sha512", 00:34:13.983 "dhgroup": "ffdhe8192" 00:34:13.983 } 00:34:13.983 } 00:34:13.983 ]' 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:13.983 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:14.243 11:42:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:14.243 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:14.243 11:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:14.243 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:15.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:15.184 11:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.184 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.184 11:42:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.126 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:16.126 { 00:34:16.126 "cntlid": 139, 00:34:16.126 "qid": 0, 00:34:16.126 "state": "enabled", 00:34:16.126 "listen_address": { 00:34:16.126 "trtype": "TCP", 00:34:16.126 "adrfam": "IPv4", 00:34:16.126 "traddr": "10.0.0.2", 00:34:16.126 "trsvcid": "4420" 00:34:16.126 }, 00:34:16.126 "peer_address": { 00:34:16.126 "trtype": "TCP", 00:34:16.126 "adrfam": "IPv4", 00:34:16.126 "traddr": "10.0.0.1", 00:34:16.126 "trsvcid": "35692" 00:34:16.126 }, 00:34:16.126 "auth": { 00:34:16.126 "state": "completed", 00:34:16.126 "digest": "sha512", 00:34:16.126 "dhgroup": "ffdhe8192" 00:34:16.126 } 00:34:16.126 } 00:34:16.126 ]' 00:34:16.126 11:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:16.126 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:16.126 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:16.126 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:16.126 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:16.388 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:16.388 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:16.388 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:16.388 11:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NWExZjNiMWY1MTFiMTQ5NWU3MzRjMDRmMzY4MWY2OTJo1Hle: --dhchap-ctrl-secret DHHC-1:02:NTQ1YzM5Njk3YWE4M2Y5NjJmNWViNWE5NjI1ZTRlYjQwZTdkY2NkZDlhZjcwMzYx1dFAMA==: 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:17.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.399 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.977 00:34:17.977 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:17.977 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:17.977 11:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:18.238 { 00:34:18.238 "cntlid": 141, 00:34:18.238 "qid": 0, 00:34:18.238 "state": "enabled", 00:34:18.238 "listen_address": { 00:34:18.238 "trtype": "TCP", 00:34:18.238 "adrfam": "IPv4", 00:34:18.238 "traddr": "10.0.0.2", 00:34:18.238 "trsvcid": "4420" 00:34:18.238 }, 00:34:18.238 "peer_address": { 00:34:18.238 "trtype": "TCP", 00:34:18.238 "adrfam": "IPv4", 00:34:18.238 "traddr": "10.0.0.1", 00:34:18.238 "trsvcid": "35720" 00:34:18.238 }, 00:34:18.238 "auth": { 00:34:18.238 "state": "completed", 00:34:18.238 "digest": "sha512", 00:34:18.238 "dhgroup": "ffdhe8192" 00:34:18.238 } 00:34:18.238 } 00:34:18.238 ]' 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:18.238 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:18.499 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:18.499 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:18.499 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:18.499 11:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:ODQ5MTVjMzgxMmFjNjg0ZTU5Y2I2MmNlYWE5OGVhMmMwZWFhYWNmOWJlYjc0MGUy8732Tw==: --dhchap-ctrl-secret DHHC-1:01:MjU5M2QyOGEzNWI0YWM0ZjgwOWY1ODJhMTA1MjQ1NzGDBhfl: 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:19.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:19.444 11:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:20.388 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.388 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:20.388 { 00:34:20.388 "cntlid": 143, 00:34:20.388 "qid": 0, 00:34:20.388 "state": "enabled", 00:34:20.388 "listen_address": { 00:34:20.388 "trtype": "TCP", 00:34:20.388 "adrfam": "IPv4", 00:34:20.388 "traddr": "10.0.0.2", 00:34:20.388 "trsvcid": "4420" 00:34:20.388 }, 00:34:20.388 "peer_address": { 00:34:20.388 "trtype": "TCP", 00:34:20.388 "adrfam": "IPv4", 00:34:20.388 "traddr": "10.0.0.1", 00:34:20.388 "trsvcid": "36658" 00:34:20.388 }, 00:34:20.388 "auth": { 00:34:20.388 "state": "completed", 00:34:20.388 "digest": "sha512", 00:34:20.388 "dhgroup": "ffdhe8192" 00:34:20.388 } 00:34:20.388 } 00:34:20.389 ]' 00:34:20.389 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:20.389 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:20.389 11:42:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:20.389 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:20.389 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:20.389 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:20.389 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:20.389 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:20.649 11:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:21.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
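For reference, each connect_authenticate iteration traced above reduces to the following host/target RPC sequence. This is a condensed sketch using the same rpc.py commands, NQNs and addresses that appear in this run; the repo-relative scripts/rpc.py path and the inline jq filter are shorthand added here, and key0/ckey0 stand for whatever keys were generated earlier in the test.
# Target side: register the host NQN with a DH-HMAC-CHAP key pair.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: restrict the allowed digest/dhgroup, then attach with the matching keys.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Check what the target negotiated for the new queue pair, then tear it down.
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
The nvme connect / nvme disconnect steps in the trace exercise the same keys from the kernel initiator, passing them as inline DHHC-1 secrets via --dhchap-secret and --dhchap-ctrl-secret.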
00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.593 11:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.166 00:34:22.166 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:22.166 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:22.166 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:22.427 { 00:34:22.427 "cntlid": 145, 00:34:22.427 "qid": 0, 00:34:22.427 "state": "enabled", 00:34:22.427 "listen_address": { 00:34:22.427 "trtype": "TCP", 00:34:22.427 "adrfam": "IPv4", 00:34:22.427 "traddr": "10.0.0.2", 00:34:22.427 "trsvcid": "4420" 00:34:22.427 }, 00:34:22.427 "peer_address": { 00:34:22.427 "trtype": "TCP", 00:34:22.427 "adrfam": "IPv4", 00:34:22.427 "traddr": "10.0.0.1", 00:34:22.427 "trsvcid": "36688" 00:34:22.427 }, 00:34:22.427 "auth": { 00:34:22.427 "state": "completed", 00:34:22.427 "digest": "sha512", 00:34:22.427 "dhgroup": "ffdhe8192" 00:34:22.427 } 00:34:22.427 } 00:34:22.427 ]' 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:22.427 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:22.428 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:22.688 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:22.688 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:22.688 11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:22.949 
11:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NjgzNzBmNzg2OWE4OWI5ZDEwOGZhNTI3MjE2YThhODc4NTAwZGY4Mjc3MmJjNGQwHdx3jA==: --dhchap-ctrl-secret DHHC-1:03:NjA5NGI2YWUyZDA2MjFjYTYzNzllNmJjNWJlY2ZlZmRmMWNlNWIyZjkwOTE1ODE4OTNmYTUwNzMxMWUwNTBiNm8Ba8A=: 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:23.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:34:23.521 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:23.522 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:23.522 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:34:24.093 request: 00:34:24.093 { 00:34:24.093 "name": "nvme0", 00:34:24.093 "trtype": "tcp", 00:34:24.093 "traddr": 
"10.0.0.2", 00:34:24.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:34:24.093 "adrfam": "ipv4", 00:34:24.093 "trsvcid": "4420", 00:34:24.093 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:24.093 "dhchap_key": "key2", 00:34:24.093 "method": "bdev_nvme_attach_controller", 00:34:24.093 "req_id": 1 00:34:24.093 } 00:34:24.093 Got JSON-RPC error response 00:34:24.093 response: 00:34:24.093 { 00:34:24.093 "code": -5, 00:34:24.093 "message": "Input/output error" 00:34:24.093 } 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.093 11:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:24.093 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:24.665 request: 00:34:24.665 { 00:34:24.665 "name": "nvme0", 00:34:24.665 "trtype": "tcp", 00:34:24.665 "traddr": "10.0.0.2", 00:34:24.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:34:24.665 "adrfam": "ipv4", 00:34:24.665 "trsvcid": "4420", 00:34:24.665 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:24.665 "dhchap_key": "key1", 00:34:24.665 "dhchap_ctrlr_key": "ckey2", 00:34:24.665 "method": "bdev_nvme_attach_controller", 00:34:24.665 "req_id": 1 00:34:24.665 } 00:34:24.665 Got JSON-RPC error response 00:34:24.665 response: 00:34:24.665 { 00:34:24.665 "code": -5, 00:34:24.665 "message": "Input/output error" 00:34:24.665 } 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.665 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.666 11:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.236 request: 00:34:25.236 { 00:34:25.236 "name": "nvme0", 00:34:25.236 "trtype": "tcp", 00:34:25.236 "traddr": "10.0.0.2", 00:34:25.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:34:25.237 "adrfam": "ipv4", 00:34:25.237 "trsvcid": "4420", 00:34:25.237 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:25.237 "dhchap_key": "key1", 00:34:25.237 "dhchap_ctrlr_key": "ckey1", 00:34:25.237 "method": "bdev_nvme_attach_controller", 00:34:25.237 "req_id": 1 00:34:25.237 } 00:34:25.237 Got JSON-RPC error response 00:34:25.237 response: 00:34:25.237 { 00:34:25.237 "code": -5, 00:34:25.237 "message": "Input/output error" 00:34:25.237 } 00:34:25.237 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:34:25.237 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:25.237 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:25.237 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:25.237 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:25.237 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.237 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2323650 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2323650 ']' 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2323650 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2323650 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2323650' 00:34:25.498 killing process with pid 2323650 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2323650 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2323650 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:34:25.498 11:42:54 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2352599 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2352599 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2352599 ']' 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:25.498 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2352599 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2352599 ']' 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
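The earlier failed bdev_nvme_attach_controller attempts (JSON-RPC error -5, "Input/output error") are the intended negative cases: authentication is expected to fail whenever the key the host presents does not match the key registered for its NQN on the target, or the controller key is wrong. A minimal sketch of that pattern with the same commands used in this run (key names are illustrative):
# Target allows the host with key1 only.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1
# Host attaches with key2 instead; this should fail with code -5 / Input/output error.
if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
     -a 10.0.0.2 -s 4420 \
     -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
     -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
  echo "ERROR: mismatched key unexpectedly authenticated"
else
  echo "authentication failed as expected"
fi
The target restarted above with -e 0xFFFF --wait-for-rpc -L nvmf_auth then repeats the flow with the nvmf_auth debug log component enabled, so the authentication exchange itself shows up in the target log.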
00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:25.760 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:26.022 11:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:26.966 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:34:26.966 { 00:34:26.966 
"cntlid": 1, 00:34:26.966 "qid": 0, 00:34:26.966 "state": "enabled", 00:34:26.966 "listen_address": { 00:34:26.966 "trtype": "TCP", 00:34:26.966 "adrfam": "IPv4", 00:34:26.966 "traddr": "10.0.0.2", 00:34:26.966 "trsvcid": "4420" 00:34:26.966 }, 00:34:26.966 "peer_address": { 00:34:26.966 "trtype": "TCP", 00:34:26.966 "adrfam": "IPv4", 00:34:26.966 "traddr": "10.0.0.1", 00:34:26.966 "trsvcid": "36742" 00:34:26.966 }, 00:34:26.966 "auth": { 00:34:26.966 "state": "completed", 00:34:26.966 "digest": "sha512", 00:34:26.966 "dhgroup": "ffdhe8192" 00:34:26.966 } 00:34:26.966 } 00:34:26.966 ]' 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:26.966 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:34:27.227 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:27.227 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:34:27.227 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:27.227 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:27.227 11:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:27.227 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:MzJhNTUwNjMxZWE4YWY2ZjVhNmI5NzRlYmM2MGM3Mzg2YTBmNTBlY2NlNzRlMWZmZTc3MDJjZjFmNzg4ZjE3NlxjVxY=: 00:34:28.169 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:28.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:28.169 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:28.169 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.169 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:28.169 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.170 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:34:28.170 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.170 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:28.170 11:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.170 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:34:28.170 11:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.170 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.431 request: 00:34:28.431 { 00:34:28.431 "name": "nvme0", 00:34:28.431 "trtype": "tcp", 00:34:28.431 "traddr": "10.0.0.2", 00:34:28.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:34:28.431 "adrfam": "ipv4", 00:34:28.431 "trsvcid": "4420", 00:34:28.431 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:28.431 "dhchap_key": "key3", 00:34:28.431 "method": "bdev_nvme_attach_controller", 00:34:28.431 "req_id": 1 00:34:28.431 } 00:34:28.431 Got JSON-RPC error response 00:34:28.431 response: 00:34:28.431 { 00:34:28.431 "code": -5, 00:34:28.431 "message": "Input/output error" 00:34:28.431 } 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:34:28.431 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.692 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:34:28.953 request: 00:34:28.953 { 00:34:28.953 "name": "nvme0", 00:34:28.953 "trtype": "tcp", 00:34:28.953 "traddr": "10.0.0.2", 00:34:28.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:34:28.953 "adrfam": "ipv4", 00:34:28.953 "trsvcid": "4420", 00:34:28.953 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:28.953 "dhchap_key": "key3", 00:34:28.953 "method": "bdev_nvme_attach_controller", 00:34:28.953 "req_id": 1 00:34:28.953 } 00:34:28.953 Got JSON-RPC error response 00:34:28.953 response: 00:34:28.953 { 00:34:28.953 "code": -5, 00:34:28.953 "message": "Input/output error" 00:34:28.953 } 00:34:28.953 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:34:28.953 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:28.953 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:28.953 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:28.954 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:34:28.954 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:34:28.954 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:34:28.954 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:28.954 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:28.954 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:29.215 11:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:29.216 11:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:29.476 request: 00:34:29.476 { 00:34:29.476 "name": "nvme0", 00:34:29.476 "trtype": "tcp", 00:34:29.476 "traddr": "10.0.0.2", 00:34:29.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:34:29.476 "adrfam": "ipv4", 00:34:29.476 "trsvcid": "4420", 00:34:29.476 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:29.476 "dhchap_key": "key0", 00:34:29.476 "dhchap_ctrlr_key": "key1", 00:34:29.476 "method": "bdev_nvme_attach_controller", 00:34:29.476 "req_id": 1 00:34:29.476 } 00:34:29.476 Got JSON-RPC error response 00:34:29.476 response: 00:34:29.476 { 00:34:29.476 "code": -5, 00:34:29.476 "message": "Input/output error" 00:34:29.476 } 00:34:29.476 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:34:29.476 11:42:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:29.476 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:29.476 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:29.476 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:29.476 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:34:29.737 00:34:29.737 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:34:29.737 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:34:29.737 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:29.737 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.737 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:29.737 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2323804 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2323804 ']' 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2323804 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2323804 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2323804' 00:34:29.999 killing process with pid 2323804 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2323804 00:34:29.999 11:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2323804 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
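Note: the checks that finish here are the mismatch cases from target/auth.sh. After the host's allowed digests/dhgroups are narrowed with bdev_nvme_set_options, each subsequent attach attempt is expected to fail with the -5 (Input/output error) JSON-RPC response shown above, and only the final attach using key0 alone succeeds. A condensed sketch of one such check, with the socket path, address and NQNs exactly as they appear in the trace (the NOT/es bookkeeping from autotest_common.sh is left out, and the trailing echo is only for illustration):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Host side is limited to sha256, so the key3 attach below is expected to be rejected
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 \
        && echo "unexpected success" || echo "rejected as expected"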
00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:30.260 rmmod nvme_tcp 00:34:30.260 rmmod nvme_fabrics 00:34:30.260 rmmod nvme_keyring 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2352599 ']' 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2352599 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2352599 ']' 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2352599 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:30.260 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2352599 00:34:30.521 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2352599' 00:34:30.522 killing process with pid 2352599 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2352599 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2352599 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:30.522 11:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.070 11:43:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:33.070 11:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GeB /tmp/spdk.key-sha256.yUm /tmp/spdk.key-sha384.oDZ /tmp/spdk.key-sha512.AwT /tmp/spdk.key-sha512.TDj /tmp/spdk.key-sha384.syK /tmp/spdk.key-sha256.s5c '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:34:33.070 00:34:33.070 real 2m34.808s 00:34:33.070 user 5m54.008s 00:34:33.070 sys 0m20.529s 00:34:33.070 11:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:33.070 11:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:33.070 ************************************ 00:34:33.070 END TEST 
nvmf_auth_target 00:34:33.070 ************************************ 00:34:33.070 11:43:01 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:34:33.070 11:43:01 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:33.070 11:43:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:34:33.070 11:43:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:33.070 11:43:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:33.070 ************************************ 00:34:33.070 START TEST nvmf_bdevio_no_huge 00:34:33.070 ************************************ 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:33.070 * Looking for test storage... 00:34:33.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
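Note: the auth stage ends with its summary above and run_test moves on to nvmf_bdevio_no_huge, the same bdevio exercise over TCP but with hugepages disabled, which is why both nvmf_tgt and the bdevio app are started with --no-huge -s 1024 further down in this log. A rough manual equivalent of that run_test call, using the workspace path printed in the trace and assuming the surrounding autotest environment is already in place:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages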
00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:33.070 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:34:33.071 11:43:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:39.661 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:39.662 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:39.662 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.662 11:43:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:39.662 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:39.662 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.662 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.924 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.924 
11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.924 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:39.924 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.924 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.924 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.924 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:39.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:34:39.924 00:34:39.924 --- 10.0.0.2 ping statistics --- 00:34:39.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.924 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:34:39.924 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:39.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:34:39.924 00:34:39.924 --- 10.0.0.1 ping statistics --- 00:34:39.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.924 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:39.925 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2357651 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2357651 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 2357651 ']' 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:40.186 11:43:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:40.186 [2024-06-10 11:43:08.985323] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:34:40.186 [2024-06-10 11:43:08.985391] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:34:40.186 [2024-06-10 11:43:09.079764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:40.447 [2024-06-10 11:43:09.185043] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.447 [2024-06-10 11:43:09.185095] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.447 [2024-06-10 11:43:09.185103] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.447 [2024-06-10 11:43:09.185110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.447 [2024-06-10 11:43:09.185116] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.447 [2024-06-10 11:43:09.185283] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:34:40.447 [2024-06-10 11:43:09.185442] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:34:40.447 [2024-06-10 11:43:09.185603] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:40.447 [2024-06-10 11:43:09.185604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:41.021 [2024-06-10 11:43:09.932601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 
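Note: nvmfappstart above launches the target inside the cvl_0_0_ns_spdk namespace set up by nvmf_tcp_init, with hugepages disabled and 1024 MB of ordinary memory, and with a core mask of 0x78 (cores 3-6), which matches the four reactors reported in the startup messages. The launch line, reproduced from the trace:

    # Target launch for the no-hugepages bdevio stage (values as traced above)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78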
00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:41.021 Malloc0 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:41.021 [2024-06-10 11:43:09.986406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.021 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:41.282 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:34:41.282 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:41.282 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:34:41.282 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:34:41.282 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:41.282 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:41.282 { 00:34:41.282 "params": { 00:34:41.282 "name": "Nvme$subsystem", 00:34:41.282 "trtype": "$TEST_TRANSPORT", 00:34:41.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.282 "adrfam": "ipv4", 00:34:41.282 "trsvcid": "$NVMF_PORT", 00:34:41.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.282 "hdgst": ${hdgst:-false}, 00:34:41.282 "ddgst": ${ddgst:-false} 00:34:41.282 }, 00:34:41.282 "method": "bdev_nvme_attach_controller" 00:34:41.282 } 00:34:41.282 EOF 00:34:41.282 )") 00:34:41.282 11:43:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:34:41.282 11:43:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:34:41.282 11:43:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:34:41.282 11:43:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:41.282 "params": { 00:34:41.282 "name": "Nvme1", 00:34:41.282 "trtype": "tcp", 00:34:41.282 "traddr": "10.0.0.2", 00:34:41.282 "adrfam": "ipv4", 00:34:41.282 "trsvcid": "4420", 00:34:41.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.282 "hdgst": false, 00:34:41.282 "ddgst": false 00:34:41.282 }, 00:34:41.282 "method": "bdev_nvme_attach_controller" 00:34:41.282 }' 00:34:41.282 [2024-06-10 11:43:10.039905] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:34:41.282 [2024-06-10 11:43:10.039981] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2357931 ] 00:34:41.282 [2024-06-10 11:43:10.111519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:41.282 [2024-06-10 11:43:10.209163] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.282 [2024-06-10 11:43:10.209302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.282 [2024-06-10 11:43:10.209306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.853 I/O targets: 00:34:41.853 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:41.853 00:34:41.853 00:34:41.853 CUnit - A unit testing framework for C - Version 2.1-3 00:34:41.853 http://cunit.sourceforge.net/ 00:34:41.853 00:34:41.853 00:34:41.853 Suite: bdevio tests on: Nvme1n1 00:34:41.853 Test: blockdev write read block ...passed 00:34:41.853 Test: blockdev write zeroes read block ...passed 00:34:41.853 Test: blockdev write zeroes read no split ...passed 00:34:41.853 Test: blockdev write zeroes read split ...passed 00:34:41.853 Test: blockdev write zeroes read split partial ...passed 00:34:41.853 Test: blockdev reset ...[2024-06-10 11:43:10.638946] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:41.853 [2024-06-10 11:43:10.638999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fbaf0 (9): Bad file descriptor 00:34:41.853 [2024-06-10 11:43:10.655592] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
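Note: before the bdevio run above started, bdevio.sh @18-@22 configured the target through rpc_cmd, and gen_nvmf_target_json produced the initiator config (Nvme1 attached to cnode1 at 10.0.0.2:4420) printed just above. Spelled out as direct rpc.py calls against the /var/tmp/spdk.sock socket named in the waitforlisten message, the target-side sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420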
00:34:41.853 passed 00:34:41.853 Test: blockdev write read 8 blocks ...passed 00:34:41.853 Test: blockdev write read size > 128k ...passed 00:34:41.853 Test: blockdev write read invalid size ...passed 00:34:41.853 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:41.853 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:41.853 Test: blockdev write read max offset ...passed 00:34:41.853 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:41.853 Test: blockdev writev readv 8 blocks ...passed 00:34:42.114 Test: blockdev writev readv 30 x 1block ...passed 00:34:42.114 Test: blockdev writev readv block ...passed 00:34:42.114 Test: blockdev writev readv size > 128k ...passed 00:34:42.114 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:42.114 Test: blockdev comparev and writev ...[2024-06-10 11:43:10.881320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.881344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.881355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.881361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.881871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.881879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.881888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.881893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.882396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.882404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.882413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.882418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.882917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.882925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.882934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:42.114 [2024-06-10 11:43:10.882939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:42.114 passed 00:34:42.114 Test: blockdev nvme passthru rw ...passed 00:34:42.114 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:43:10.967371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:42.114 [2024-06-10 11:43:10.967380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.967837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:42.114 [2024-06-10 11:43:10.967844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.968160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:42.114 [2024-06-10 11:43:10.968167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:42.114 [2024-06-10 11:43:10.968479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:42.114 [2024-06-10 11:43:10.968485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:42.114 passed 00:34:42.114 Test: blockdev nvme admin passthru ...passed 00:34:42.114 Test: blockdev copy ...passed 00:34:42.114 00:34:42.114 Run Summary: Type Total Ran Passed Failed Inactive 00:34:42.114 suites 1 1 n/a 0 0 00:34:42.114 tests 23 23 23 0 0 00:34:42.114 asserts 152 152 152 0 n/a 00:34:42.114 00:34:42.114 Elapsed time = 1.008 seconds 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:42.374 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:42.374 rmmod nvme_tcp 00:34:42.374 rmmod nvme_fabrics 00:34:42.374 rmmod nvme_keyring 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2357651 ']' 00:34:42.633 11:43:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2357651 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 2357651 ']' 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 2357651 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2357651 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2357651' 00:34:42.633 killing process with pid 2357651 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 2357651 00:34:42.633 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 2357651 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.893 11:43:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.435 11:43:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:45.435 00:34:45.435 real 0m12.237s 00:34:45.435 user 0m14.297s 00:34:45.435 sys 0m6.354s 00:34:45.435 11:43:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:45.435 11:43:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:45.435 ************************************ 00:34:45.435 END TEST nvmf_bdevio_no_huge 00:34:45.435 ************************************ 00:34:45.435 11:43:13 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:45.435 11:43:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:45.435 11:43:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:45.435 11:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.435 ************************************ 00:34:45.435 START TEST nvmf_tls 00:34:45.435 ************************************ 00:34:45.435 11:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:45.435 * Looking for test storage... 
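Note: teardown for the bdevio stage mirrors the auth stage above: the target started for this stage (pid 2357651) is killed, the kernel NVMe modules loaded by nvmftestinit are removed, and the initiator-side address is flushed before nvmf_tls begins. Condensed from the nvmftestfini / nvmf_tcp_fini trace above:

    kill 2357651                  # killprocess: stop the nvmf_tgt for this stage
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1      # drop the 10.0.0.1/24 address added by nvmf_tcp_init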
00:34:45.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:45.435 11:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:45.435 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:34:45.435 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:45.435 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:45.435 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:45.435 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:45.436 11:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.436 11:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:45.436 11:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:45.436 11:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:34:45.436 11:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:34:52.094 
11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:52.094 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:52.094 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:52.094 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:52.094 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:52.094 11:43:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.094 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.355 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.355 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:52.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:34:52.356 00:34:52.356 --- 10.0.0.2 ping statistics --- 00:34:52.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.356 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:52.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:34:52.356 00:34:52.356 --- 10.0.0.1 ping statistics --- 00:34:52.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.356 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2362337 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2362337 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2362337 ']' 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:52.356 11:43:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:52.356 [2024-06-10 11:43:21.171316] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:34:52.356 [2024-06-10 11:43:21.171382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.356 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.356 [2024-06-10 11:43:21.242786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.356 [2024-06-10 11:43:21.314814] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.356 [2024-06-10 11:43:21.314854] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:52.356 [2024-06-10 11:43:21.314861] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.356 [2024-06-10 11:43:21.314868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.356 [2024-06-10 11:43:21.314873] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:52.356 [2024-06-10 11:43:21.314895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:34:53.298 true 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:34:53.298 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:53.558 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:34:53.558 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:34:53.558 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:34:53.818 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:53.818 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:34:53.818 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:34:53.818 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:34:53.818 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:34:54.077 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:54.077 11:43:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:34:54.337 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:34:54.337 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:34:54.337 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:54.337 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:34:54.337 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:34:54.337 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:34:54.337 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:34:54.598 11:43:23 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:54.598 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:34:54.858 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:34:54.858 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:34:54.858 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:34:54.858 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:54.858 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:34:55.119 11:43:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.PEGsycjEm7 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.SAmtMKKZcU 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.PEGsycjEm7 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SAmtMKKZcU 00:34:55.119 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:34:55.380 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:34:55.640 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.PEGsycjEm7 00:34:55.640 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PEGsycjEm7 00:34:55.640 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:55.640 [2024-06-10 11:43:24.579763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.640 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:55.901 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:34:56.162 [2024-06-10 11:43:24.908572] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:56.162 [2024-06-10 11:43:24.908788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.162 11:43:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:34:56.162 malloc0 00:34:56.162 11:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:56.423 11:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PEGsycjEm7 00:34:56.423 [2024-06-10 11:43:25.312319] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:56.423 11:43:25 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PEGsycjEm7 00:34:56.423 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.660 Initializing NVMe Controllers 00:35:08.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:08.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:08.660 Initialization complete. Launching workers. 
00:35:08.660 ======================================================== 00:35:08.661 Latency(us) 00:35:08.661 Device Information : IOPS MiB/s Average min max 00:35:08.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13453.70 52.55 4757.70 1035.47 5333.76 00:35:08.661 ======================================================== 00:35:08.661 Total : 13453.70 52.55 4757.70 1035.47 5333.76 00:35:08.661 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PEGsycjEm7 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PEGsycjEm7' 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2365071 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2365071 /var/tmp/bdevperf.sock 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2365071 ']' 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:08.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:08.661 [2024-06-10 11:43:35.478739] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:08.661 [2024-06-10 11:43:35.478792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2365071 ] 00:35:08.661 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.661 [2024-06-10 11:43:35.527849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.661 [2024-06-10 11:43:35.579956] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PEGsycjEm7 00:35:08.661 [2024-06-10 11:43:35.836230] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:08.661 [2024-06-10 11:43:35.836286] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:08.661 TLSTESTn1 00:35:08.661 11:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:08.661 Running I/O for 10 seconds... 00:35:18.667 00:35:18.667 Latency(us) 00:35:18.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.667 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:18.667 Verification LBA range: start 0x0 length 0x2000 00:35:18.667 TLSTESTn1 : 10.02 2178.92 8.51 0.00 0.00 58685.51 5597.87 90876.59 00:35:18.667 =================================================================================================================== 00:35:18.667 Total : 2178.92 8.51 0.00 0.00 58685.51 5597.87 90876.59 00:35:18.667 0 00:35:18.667 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:18.667 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2365071 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2365071 ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2365071 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2365071 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2365071' 00:35:18.668 killing process with pid 2365071 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2365071 00:35:18.668 Received shutdown signal, test time was about 10.000000 seconds 00:35:18.668 00:35:18.668 Latency(us) 00:35:18.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:35:18.668 =================================================================================================================== 00:35:18.668 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.668 [2024-06-10 11:43:46.143320] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2365071 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SAmtMKKZcU 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SAmtMKKZcU 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SAmtMKKZcU 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SAmtMKKZcU' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2367197 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2367197 /var/tmp/bdevperf.sock 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2367197 ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:18.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:18.668 [2024-06-10 11:43:46.289874] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:18.668 [2024-06-10 11:43:46.289930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367197 ] 00:35:18.668 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.668 [2024-06-10 11:43:46.342667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.668 [2024-06-10 11:43:46.394327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SAmtMKKZcU 00:35:18.668 [2024-06-10 11:43:46.662096] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:18.668 [2024-06-10 11:43:46.662154] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:18.668 [2024-06-10 11:43:46.673912] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:18.668 [2024-06-10 11:43:46.674177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1248de0 (107): Transport endpoint is not connected 00:35:18.668 [2024-06-10 11:43:46.675173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1248de0 (9): Bad file descriptor 00:35:18.668 [2024-06-10 11:43:46.676174] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.668 [2024-06-10 11:43:46.676181] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:18.668 [2024-06-10 11:43:46.676189] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:18.668 request: 00:35:18.668 { 00:35:18.668 "name": "TLSTEST", 00:35:18.668 "trtype": "tcp", 00:35:18.668 "traddr": "10.0.0.2", 00:35:18.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.668 "adrfam": "ipv4", 00:35:18.668 "trsvcid": "4420", 00:35:18.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.668 "psk": "/tmp/tmp.SAmtMKKZcU", 00:35:18.668 "method": "bdev_nvme_attach_controller", 00:35:18.668 "req_id": 1 00:35:18.668 } 00:35:18.668 Got JSON-RPC error response 00:35:18.668 response: 00:35:18.668 { 00:35:18.668 "code": -5, 00:35:18.668 "message": "Input/output error" 00:35:18.668 } 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2367197 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2367197 ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2367197 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367197 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2367197' 00:35:18.668 killing process with pid 2367197 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2367197 00:35:18.668 Received shutdown signal, test time was about 10.000000 seconds 00:35:18.668 00:35:18.668 Latency(us) 00:35:18.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.668 =================================================================================================================== 00:35:18.668 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:18.668 [2024-06-10 11:43:46.744350] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2367197 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PEGsycjEm7 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PEGsycjEm7 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PEGsycjEm7 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PEGsycjEm7' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2367421 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2367421 /var/tmp/bdevperf.sock 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2367421 ']' 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:18.668 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:18.669 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:18.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:18.669 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:18.669 11:43:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:18.669 [2024-06-10 11:43:46.870894] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:18.669 [2024-06-10 11:43:46.870940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367421 ] 00:35:18.669 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.669 [2024-06-10 11:43:46.912686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.669 [2024-06-10 11:43:46.964382] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.PEGsycjEm7 00:35:18.669 [2024-06-10 11:43:47.244234] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:18.669 [2024-06-10 11:43:47.244292] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:18.669 [2024-06-10 11:43:47.255256] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:35:18.669 [2024-06-10 11:43:47.255279] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:35:18.669 [2024-06-10 11:43:47.255301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:18.669 [2024-06-10 11:43:47.255331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe48de0 (107): Transport endpoint is not connected 00:35:18.669 [2024-06-10 11:43:47.256307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe48de0 (9): Bad file descriptor 00:35:18.669 [2024-06-10 11:43:47.257309] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:18.669 [2024-06-10 11:43:47.257315] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:18.669 [2024-06-10 11:43:47.257323] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:18.669 request: 00:35:18.669 { 00:35:18.669 "name": "TLSTEST", 00:35:18.669 "trtype": "tcp", 00:35:18.669 "traddr": "10.0.0.2", 00:35:18.669 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:18.669 "adrfam": "ipv4", 00:35:18.669 "trsvcid": "4420", 00:35:18.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.669 "psk": "/tmp/tmp.PEGsycjEm7", 00:35:18.669 "method": "bdev_nvme_attach_controller", 00:35:18.669 "req_id": 1 00:35:18.669 } 00:35:18.669 Got JSON-RPC error response 00:35:18.669 response: 00:35:18.669 { 00:35:18.669 "code": -5, 00:35:18.669 "message": "Input/output error" 00:35:18.669 } 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2367421 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2367421 ']' 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2367421 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367421 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2367421' 00:35:18.669 killing process with pid 2367421 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2367421 00:35:18.669 Received shutdown signal, test time was about 10.000000 seconds 00:35:18.669 00:35:18.669 Latency(us) 00:35:18.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.669 =================================================================================================================== 00:35:18.669 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:18.669 [2024-06-10 11:43:47.326462] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2367421 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PEGsycjEm7 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PEGsycjEm7 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PEGsycjEm7 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PEGsycjEm7' 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2367432 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2367432 /var/tmp/bdevperf.sock 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2367432 ']' 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:18.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:18.669 [2024-06-10 11:43:47.453043] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:18.669 [2024-06-10 11:43:47.453088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367432 ] 00:35:18.669 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.669 [2024-06-10 11:43:47.494702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.669 [2024-06-10 11:43:47.546332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:18.669 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PEGsycjEm7 00:35:18.930 [2024-06-10 11:43:47.826222] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:18.930 [2024-06-10 11:43:47.826277] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:18.930 [2024-06-10 11:43:47.836677] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:35:18.930 [2024-06-10 11:43:47.836698] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:35:18.930 [2024-06-10 11:43:47.836721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:18.930 [2024-06-10 11:43:47.837198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198ede0 (107): Transport endpoint is not connected 00:35:18.930 [2024-06-10 11:43:47.838194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198ede0 (9): Bad file descriptor 00:35:18.930 [2024-06-10 11:43:47.839195] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:35:18.930 [2024-06-10 11:43:47.839202] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:18.930 [2024-06-10 11:43:47.839209] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:35:18.930 request: 00:35:18.930 { 00:35:18.930 "name": "TLSTEST", 00:35:18.930 "trtype": "tcp", 00:35:18.930 "traddr": "10.0.0.2", 00:35:18.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.930 "adrfam": "ipv4", 00:35:18.930 "trsvcid": "4420", 00:35:18.930 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:18.930 "psk": "/tmp/tmp.PEGsycjEm7", 00:35:18.930 "method": "bdev_nvme_attach_controller", 00:35:18.930 "req_id": 1 00:35:18.930 } 00:35:18.930 Got JSON-RPC error response 00:35:18.930 response: 00:35:18.930 { 00:35:18.930 "code": -5, 00:35:18.930 "message": "Input/output error" 00:35:18.930 } 00:35:18.930 11:43:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2367432 00:35:18.930 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2367432 ']' 00:35:18.930 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2367432 00:35:18.931 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:18.931 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:18.931 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367432 00:35:19.191 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:19.191 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:19.191 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2367432' 00:35:19.191 killing process with pid 2367432 00:35:19.191 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2367432 00:35:19.191 Received shutdown signal, test time was about 10.000000 seconds 00:35:19.191 00:35:19.191 Latency(us) 00:35:19.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.191 =================================================================================================================== 00:35:19.191 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:19.191 [2024-06-10 11:43:47.907184] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:19.191 11:43:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2367432 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:19.191 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2367577 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2367577 /var/tmp/bdevperf.sock 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2367577 ']' 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:19.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:19.192 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:19.192 [2024-06-10 11:43:48.034683] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:19.192 [2024-06-10 11:43:48.034730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367577 ] 00:35:19.192 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.192 [2024-06-10 11:43:48.076795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.192 [2024-06-10 11:43:48.128504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.452 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:19.452 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:19.452 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:19.452 [2024-06-10 11:43:48.418190] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:19.452 [2024-06-10 11:43:48.420007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29820 (9): Bad file descriptor 00:35:19.452 [2024-06-10 11:43:48.421006] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:19.452 [2024-06-10 11:43:48.421013] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:35:19.453 [2024-06-10 11:43:48.421020] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:19.713 request: 00:35:19.713 { 00:35:19.713 "name": "TLSTEST", 00:35:19.713 "trtype": "tcp", 00:35:19.713 "traddr": "10.0.0.2", 00:35:19.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.713 "adrfam": "ipv4", 00:35:19.713 "trsvcid": "4420", 00:35:19.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.713 "method": "bdev_nvme_attach_controller", 00:35:19.713 "req_id": 1 00:35:19.713 } 00:35:19.713 Got JSON-RPC error response 00:35:19.713 response: 00:35:19.713 { 00:35:19.713 "code": -5, 00:35:19.713 "message": "Input/output error" 00:35:19.713 } 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2367577 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2367577 ']' 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2367577 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367577 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2367577' 00:35:19.713 killing process with pid 2367577 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2367577 00:35:19.713 Received shutdown signal, test time was about 10.000000 seconds 00:35:19.713 00:35:19.713 Latency(us) 00:35:19.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.713 =================================================================================================================== 00:35:19.713 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2367577 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2362337 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2362337 ']' 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2362337 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2362337 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2362337' 00:35:19.713 killing process with pid 2362337 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2362337 
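The two attach attempts traced above are deliberate negative tests: the first presents a PSK (/tmp/tmp.PEGsycjEm7) that this target was never configured with, the second presents no PSK at all, and both are expected to end in the JSON-RPC -5 "Input/output error" seen in the request/response dumps. Condensed, and offered as a sketch rather than the exact wrapper the suite uses (rpc.py stands for the full scripts/rpc.py path in the traces), the failing initiator-side call looks like:

# negative path: the target does not know this PSK, so the TLS session cannot be established
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PEGsycjEm7
# target log: posix_sock_psk_find_session_server_cb "Unable to find PSK for identity ..."
# initiator result: JSON-RPC error -5 (Input/output error), which is the expected outcome here

The second variant, attaching to cnode1 without any --psk, fails with the same -5 error.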
00:35:19.713 [2024-06-10 11:43:48.649429] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:19.713 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2362337 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.4TOX5ikiIP 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.4TOX5ikiIP 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2367786 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2367786 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2367786 ']' 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:19.974 11:43:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:19.974 [2024-06-10 11:43:48.907117] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
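The NVMeTLSkey-1:02:...: string produced by format_interchange_psk above is the NVMe TLS PSK interchange format: a version prefix, a hash/length identifier (02 here, matching the digest=2 argument), and a base64 blob. Judging from the value in the trace, the blob is the ASCII secret followed by four extra bytes, which is consistent with base64(secret || CRC32(secret)); the exact packing is an assumption, so the following is a hypothetical reconstruction rather than a copy of the suite's helper:

# hypothetical reconstruction of the interchange key from the raw secret
secret=00112233445566778899aabbccddeeff0011223344556677
python3 - "$secret" <<'PYEOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                           # the secret is used as an ASCII string
blob = secret + struct.pack('<I', zlib.crc32(secret))   # assumed little-endian CRC32 trailer
print('NVMeTLSkey-1:02:' + base64.b64encode(blob).decode() + ':')
PYEOF

The result is written to /tmp/tmp.4TOX5ikiIP and chmod'ed to 0600, which matters later: both the initiator and the target refuse to load a PSK file with looser permissions.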
00:35:19.974 [2024-06-10 11:43:48.907168] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.974 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.235 [2024-06-10 11:43:48.971557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.235 [2024-06-10 11:43:49.034193] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.235 [2024-06-10 11:43:49.034229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:20.235 [2024-06-10 11:43:49.034236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.235 [2024-06-10 11:43:49.034242] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.235 [2024-06-10 11:43:49.034248] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:20.235 [2024-06-10 11:43:49.034272] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.4TOX5ikiIP 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4TOX5ikiIP 00:35:20.235 11:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:20.495 [2024-06-10 11:43:49.339499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.495 11:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:20.755 11:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:21.015 [2024-06-10 11:43:49.736497] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:21.015 [2024-06-10 11:43:49.736718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.015 11:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:21.015 malloc0 00:35:21.015 11:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:21.276 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP 
00:35:21.537 [2024-06-10 11:43:50.332784] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4TOX5ikiIP 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4TOX5ikiIP' 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2368146 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2368146 /var/tmp/bdevperf.sock 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2368146 ']' 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:21.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:21.537 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:21.537 [2024-06-10 11:43:50.395864] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
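Stripped of the xtrace noise, the TLS-enabled target that the first successful run talks to is built from the RPC calls already traced above (rpc.py stands for scripts/rpc.py in the SPDK tree; -k on the listener enables TLS, and --psk on add_host binds the allowed host NQN to the 0600 key file):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP

The bdevperf side then attaches with the matching PSK and host NQN, as traced below, which is what produces the TLSTESTn1 bdev used for the 10-second verify workload:

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP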
00:35:21.537 [2024-06-10 11:43:50.395912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2368146 ] 00:35:21.537 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.537 [2024-06-10 11:43:50.445795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.537 [2024-06-10 11:43:50.498625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:21.798 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:21.798 11:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:21.798 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP 00:35:21.798 [2024-06-10 11:43:50.766317] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:21.798 [2024-06-10 11:43:50.766383] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:22.058 TLSTESTn1 00:35:22.058 11:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:22.058 Running I/O for 10 seconds... 00:35:32.061 00:35:32.061 Latency(us) 00:35:32.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.061 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:32.061 Verification LBA range: start 0x0 length 0x2000 00:35:32.061 TLSTESTn1 : 10.03 4871.02 19.03 0.00 0.00 26227.57 5652.48 46967.47 00:35:32.061 =================================================================================================================== 00:35:32.061 Total : 4871.02 19.03 0.00 0.00 26227.57 5652.48 46967.47 00:35:32.061 0 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2368146 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2368146 ']' 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2368146 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2368146 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2368146' 00:35:32.322 killing process with pid 2368146 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2368146 00:35:32.322 Received shutdown signal, test time was about 10.000000 seconds 00:35:32.322 00:35:32.322 Latency(us) 00:35:32.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:35:32.322 =================================================================================================================== 00:35:32.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:32.322 [2024-06-10 11:44:01.097473] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2368146 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.4TOX5ikiIP 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4TOX5ikiIP 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4TOX5ikiIP 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4TOX5ikiIP 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4TOX5ikiIP' 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2370162 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2370162 /var/tmp/bdevperf.sock 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2370162 ']' 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:32.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:32.322 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:32.322 [2024-06-10 11:44:01.274079] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
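The chmod 0666 near the start of this block intentionally makes the key file group/world readable so that the next attach exercises the PSK permission check; the initiator refuses to load a PSK file that is readable by anyone other than its owner. In condensed form (same arguments as the successful attach above, only the file mode differs):

chmod 0666 /tmp/tmp.4TOX5ikiIP     # deliberately too permissive
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP
# expected: bdev_nvme_load_psk "Incorrect permissions for PSK file" -> JSON-RPC -1 Operation not permitted

The key is restored to 0600 further down (target/tls.sh@181) before the final positive run.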
00:35:32.322 [2024-06-10 11:44:01.274133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370162 ] 00:35:32.584 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.584 [2024-06-10 11:44:01.325176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.584 [2024-06-10 11:44:01.375446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.584 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:32.584 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:32.584 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP 00:35:32.845 [2024-06-10 11:44:01.643392] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:32.845 [2024-06-10 11:44:01.643436] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:35:32.845 [2024-06-10 11:44:01.643441] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.4TOX5ikiIP 00:35:32.845 request: 00:35:32.845 { 00:35:32.845 "name": "TLSTEST", 00:35:32.845 "trtype": "tcp", 00:35:32.845 "traddr": "10.0.0.2", 00:35:32.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:32.845 "adrfam": "ipv4", 00:35:32.845 "trsvcid": "4420", 00:35:32.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:32.845 "psk": "/tmp/tmp.4TOX5ikiIP", 00:35:32.845 "method": "bdev_nvme_attach_controller", 00:35:32.845 "req_id": 1 00:35:32.845 } 00:35:32.845 Got JSON-RPC error response 00:35:32.845 response: 00:35:32.845 { 00:35:32.845 "code": -1, 00:35:32.845 "message": "Operation not permitted" 00:35:32.845 } 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2370162 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2370162 ']' 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2370162 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2370162 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2370162' 00:35:32.845 killing process with pid 2370162 00:35:32.845 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2370162 00:35:32.846 Received shutdown signal, test time was about 10.000000 seconds 00:35:32.846 00:35:32.846 Latency(us) 00:35:32.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.846 =================================================================================================================== 00:35:32.846 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:32.846 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 2370162 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2367786 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2367786 ']' 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2367786 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367786 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2367786' 00:35:33.106 killing process with pid 2367786 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2367786 00:35:33.106 [2024-06-10 11:44:01.890079] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:33.106 11:44:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2367786 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2370376 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2370376 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2370376 ']' 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:33.106 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:33.106 [2024-06-10 11:44:02.070165] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
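The same permission rule is enforced on the target side: with the key still at 0666, the NOT setup_nvmf_tgt pass below gets as far as creating the transport, subsystem and TLS listener, but nvmf_subsystem_add_host is expected to fail when it tries to read the key. Condensed:

# target side, key file still 0666 at this point
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP
# expected: tcp_load_psk "Incorrect permissions for PSK file" -> JSON-RPC -32603 Internal error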
00:35:33.106 [2024-06-10 11:44:02.070220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.367 EAL: No free 2048 kB hugepages reported on node 1 00:35:33.367 [2024-06-10 11:44:02.132868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.367 [2024-06-10 11:44:02.195634] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:33.367 [2024-06-10 11:44:02.195675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:33.367 [2024-06-10 11:44:02.195682] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:33.367 [2024-06-10 11:44:02.195689] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:33.367 [2024-06-10 11:44:02.195695] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:33.367 [2024-06-10 11:44:02.195712] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.4TOX5ikiIP 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.4TOX5ikiIP 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.4TOX5ikiIP 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4TOX5ikiIP 00:35:33.939 11:44:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:34.199 [2024-06-10 11:44:03.030763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:34.199 11:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:34.461 11:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:34.461 [2024-06-10 11:44:03.367604] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:35:34.461 [2024-06-10 11:44:03.367830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.461 11:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:34.722 malloc0 00:35:34.722 11:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP 00:35:34.984 [2024-06-10 11:44:03.843594] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:35:34.984 [2024-06-10 11:44:03.843618] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:35:34.984 [2024-06-10 11:44:03.843645] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:35:34.984 request: 00:35:34.984 { 00:35:34.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.984 "host": "nqn.2016-06.io.spdk:host1", 00:35:34.984 "psk": "/tmp/tmp.4TOX5ikiIP", 00:35:34.984 "method": "nvmf_subsystem_add_host", 00:35:34.984 "req_id": 1 00:35:34.984 } 00:35:34.984 Got JSON-RPC error response 00:35:34.984 response: 00:35:34.984 { 00:35:34.984 "code": -32603, 00:35:34.984 "message": "Internal error" 00:35:34.984 } 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2370376 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2370376 ']' 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2370376 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2370376 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2370376' 00:35:34.984 killing process with pid 2370376 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2370376 00:35:34.984 11:44:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2370376 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.4TOX5ikiIP 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=2370866 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2370866 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2370866 ']' 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:35.246 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:35.246 [2024-06-10 11:44:04.087549] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:35:35.246 [2024-06-10 11:44:04.087601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.246 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.246 [2024-06-10 11:44:04.149606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.246 [2024-06-10 11:44:04.212569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.246 [2024-06-10 11:44:04.212605] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.246 [2024-06-10 11:44:04.212612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.246 [2024-06-10 11:44:04.212619] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.246 [2024-06-10 11:44:04.212624] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:35.246 [2024-06-10 11:44:04.212646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.4TOX5ikiIP 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4TOX5ikiIP 00:35:36.189 11:44:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:36.189 [2024-06-10 11:44:05.043407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.189 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:36.449 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:36.449 [2024-06-10 11:44:05.316093] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:36.449 [2024-06-10 11:44:05.316318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.449 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:36.710 malloc0 00:35:36.710 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:36.711 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP 00:35:36.973 [2024-06-10 11:44:05.788153] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2371228 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2371228 /var/tmp/bdevperf.sock 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2371228 ']' 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:36.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
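After the key is back at 0600 and a fresh target plus bdevperf instance are up, the run below attaches TLSTESTn1 again and then captures both daemons' configuration with save_config. The resulting JSON dumps record the PSK as the --psk path: on nvmf_subsystem_add_host in the target config, and on bdev_nvme_attach_controller in the bdevperf config. To pull out just those sections from either dump, something along these lines would work (the jq usage is illustrative and not part of the test):

rpc.py save_config | jq '.subsystems[] | select(.subsystem == "nvmf")'
rpc.py -s /var/tmp/bdevperf.sock save_config | jq '.subsystems[] | select(.subsystem == "bdev")'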
00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:36.973 11:44:05 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:36.973 [2024-06-10 11:44:05.859822] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:35:36.973 [2024-06-10 11:44:05.859873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371228 ] 00:35:36.973 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.973 [2024-06-10 11:44:05.908686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.237 [2024-06-10 11:44:05.960934] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.808 11:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:37.808 11:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:37.808 11:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP 00:35:38.088 [2024-06-10 11:44:06.845849] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:38.088 [2024-06-10 11:44:06.845908] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:38.088 TLSTESTn1 00:35:38.088 11:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:35:38.386 11:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:35:38.386 "subsystems": [ 00:35:38.386 { 00:35:38.386 "subsystem": "keyring", 00:35:38.386 "config": [] 00:35:38.386 }, 00:35:38.386 { 00:35:38.386 "subsystem": "iobuf", 00:35:38.386 "config": [ 00:35:38.386 { 00:35:38.386 "method": "iobuf_set_options", 00:35:38.386 "params": { 00:35:38.386 "small_pool_count": 8192, 00:35:38.386 "large_pool_count": 1024, 00:35:38.386 "small_bufsize": 8192, 00:35:38.386 "large_bufsize": 135168 00:35:38.386 } 00:35:38.386 } 00:35:38.386 ] 00:35:38.386 }, 00:35:38.386 { 00:35:38.386 "subsystem": "sock", 00:35:38.386 "config": [ 00:35:38.386 { 00:35:38.386 "method": "sock_set_default_impl", 00:35:38.386 "params": { 00:35:38.386 "impl_name": "posix" 00:35:38.386 } 00:35:38.386 }, 00:35:38.386 { 00:35:38.386 "method": "sock_impl_set_options", 00:35:38.386 "params": { 00:35:38.386 "impl_name": "ssl", 00:35:38.386 "recv_buf_size": 4096, 00:35:38.386 "send_buf_size": 4096, 00:35:38.386 "enable_recv_pipe": true, 00:35:38.386 "enable_quickack": false, 00:35:38.386 "enable_placement_id": 0, 00:35:38.386 "enable_zerocopy_send_server": true, 00:35:38.386 "enable_zerocopy_send_client": false, 00:35:38.386 "zerocopy_threshold": 0, 00:35:38.386 "tls_version": 0, 00:35:38.386 "enable_ktls": false 00:35:38.386 } 00:35:38.386 }, 00:35:38.386 { 00:35:38.386 "method": "sock_impl_set_options", 00:35:38.386 "params": { 00:35:38.386 "impl_name": "posix", 00:35:38.386 "recv_buf_size": 2097152, 
00:35:38.386 "send_buf_size": 2097152, 00:35:38.386 "enable_recv_pipe": true, 00:35:38.386 "enable_quickack": false, 00:35:38.386 "enable_placement_id": 0, 00:35:38.386 "enable_zerocopy_send_server": true, 00:35:38.386 "enable_zerocopy_send_client": false, 00:35:38.386 "zerocopy_threshold": 0, 00:35:38.386 "tls_version": 0, 00:35:38.386 "enable_ktls": false 00:35:38.386 } 00:35:38.386 } 00:35:38.387 ] 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "subsystem": "vmd", 00:35:38.387 "config": [] 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "subsystem": "accel", 00:35:38.387 "config": [ 00:35:38.387 { 00:35:38.387 "method": "accel_set_options", 00:35:38.387 "params": { 00:35:38.387 "small_cache_size": 128, 00:35:38.387 "large_cache_size": 16, 00:35:38.387 "task_count": 2048, 00:35:38.387 "sequence_count": 2048, 00:35:38.387 "buf_count": 2048 00:35:38.387 } 00:35:38.387 } 00:35:38.387 ] 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "subsystem": "bdev", 00:35:38.387 "config": [ 00:35:38.387 { 00:35:38.387 "method": "bdev_set_options", 00:35:38.387 "params": { 00:35:38.387 "bdev_io_pool_size": 65535, 00:35:38.387 "bdev_io_cache_size": 256, 00:35:38.387 "bdev_auto_examine": true, 00:35:38.387 "iobuf_small_cache_size": 128, 00:35:38.387 "iobuf_large_cache_size": 16 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "bdev_raid_set_options", 00:35:38.387 "params": { 00:35:38.387 "process_window_size_kb": 1024 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "bdev_iscsi_set_options", 00:35:38.387 "params": { 00:35:38.387 "timeout_sec": 30 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "bdev_nvme_set_options", 00:35:38.387 "params": { 00:35:38.387 "action_on_timeout": "none", 00:35:38.387 "timeout_us": 0, 00:35:38.387 "timeout_admin_us": 0, 00:35:38.387 "keep_alive_timeout_ms": 10000, 00:35:38.387 "arbitration_burst": 0, 00:35:38.387 "low_priority_weight": 0, 00:35:38.387 "medium_priority_weight": 0, 00:35:38.387 "high_priority_weight": 0, 00:35:38.387 "nvme_adminq_poll_period_us": 10000, 00:35:38.387 "nvme_ioq_poll_period_us": 0, 00:35:38.387 "io_queue_requests": 0, 00:35:38.387 "delay_cmd_submit": true, 00:35:38.387 "transport_retry_count": 4, 00:35:38.387 "bdev_retry_count": 3, 00:35:38.387 "transport_ack_timeout": 0, 00:35:38.387 "ctrlr_loss_timeout_sec": 0, 00:35:38.387 "reconnect_delay_sec": 0, 00:35:38.387 "fast_io_fail_timeout_sec": 0, 00:35:38.387 "disable_auto_failback": false, 00:35:38.387 "generate_uuids": false, 00:35:38.387 "transport_tos": 0, 00:35:38.387 "nvme_error_stat": false, 00:35:38.387 "rdma_srq_size": 0, 00:35:38.387 "io_path_stat": false, 00:35:38.387 "allow_accel_sequence": false, 00:35:38.387 "rdma_max_cq_size": 0, 00:35:38.387 "rdma_cm_event_timeout_ms": 0, 00:35:38.387 "dhchap_digests": [ 00:35:38.387 "sha256", 00:35:38.387 "sha384", 00:35:38.387 "sha512" 00:35:38.387 ], 00:35:38.387 "dhchap_dhgroups": [ 00:35:38.387 "null", 00:35:38.387 "ffdhe2048", 00:35:38.387 "ffdhe3072", 00:35:38.387 "ffdhe4096", 00:35:38.387 "ffdhe6144", 00:35:38.387 "ffdhe8192" 00:35:38.387 ] 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "bdev_nvme_set_hotplug", 00:35:38.387 "params": { 00:35:38.387 "period_us": 100000, 00:35:38.387 "enable": false 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "bdev_malloc_create", 00:35:38.387 "params": { 00:35:38.387 "name": "malloc0", 00:35:38.387 "num_blocks": 8192, 00:35:38.387 "block_size": 4096, 00:35:38.387 "physical_block_size": 4096, 00:35:38.387 "uuid": 
"b3f1df7d-84d3-4bfe-9b51-ac4c64a4bb40", 00:35:38.387 "optimal_io_boundary": 0 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "bdev_wait_for_examine" 00:35:38.387 } 00:35:38.387 ] 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "subsystem": "nbd", 00:35:38.387 "config": [] 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "subsystem": "scheduler", 00:35:38.387 "config": [ 00:35:38.387 { 00:35:38.387 "method": "framework_set_scheduler", 00:35:38.387 "params": { 00:35:38.387 "name": "static" 00:35:38.387 } 00:35:38.387 } 00:35:38.387 ] 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "subsystem": "nvmf", 00:35:38.387 "config": [ 00:35:38.387 { 00:35:38.387 "method": "nvmf_set_config", 00:35:38.387 "params": { 00:35:38.387 "discovery_filter": "match_any", 00:35:38.387 "admin_cmd_passthru": { 00:35:38.387 "identify_ctrlr": false 00:35:38.387 } 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "nvmf_set_max_subsystems", 00:35:38.387 "params": { 00:35:38.387 "max_subsystems": 1024 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "nvmf_set_crdt", 00:35:38.387 "params": { 00:35:38.387 "crdt1": 0, 00:35:38.387 "crdt2": 0, 00:35:38.387 "crdt3": 0 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "nvmf_create_transport", 00:35:38.387 "params": { 00:35:38.387 "trtype": "TCP", 00:35:38.387 "max_queue_depth": 128, 00:35:38.387 "max_io_qpairs_per_ctrlr": 127, 00:35:38.387 "in_capsule_data_size": 4096, 00:35:38.387 "max_io_size": 131072, 00:35:38.387 "io_unit_size": 131072, 00:35:38.387 "max_aq_depth": 128, 00:35:38.387 "num_shared_buffers": 511, 00:35:38.387 "buf_cache_size": 4294967295, 00:35:38.387 "dif_insert_or_strip": false, 00:35:38.387 "zcopy": false, 00:35:38.387 "c2h_success": false, 00:35:38.387 "sock_priority": 0, 00:35:38.387 "abort_timeout_sec": 1, 00:35:38.387 "ack_timeout": 0, 00:35:38.387 "data_wr_pool_size": 0 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "nvmf_create_subsystem", 00:35:38.387 "params": { 00:35:38.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.387 "allow_any_host": false, 00:35:38.387 "serial_number": "SPDK00000000000001", 00:35:38.387 "model_number": "SPDK bdev Controller", 00:35:38.387 "max_namespaces": 10, 00:35:38.387 "min_cntlid": 1, 00:35:38.387 "max_cntlid": 65519, 00:35:38.387 "ana_reporting": false 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "nvmf_subsystem_add_host", 00:35:38.387 "params": { 00:35:38.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.387 "host": "nqn.2016-06.io.spdk:host1", 00:35:38.387 "psk": "/tmp/tmp.4TOX5ikiIP" 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "nvmf_subsystem_add_ns", 00:35:38.387 "params": { 00:35:38.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.387 "namespace": { 00:35:38.387 "nsid": 1, 00:35:38.387 "bdev_name": "malloc0", 00:35:38.387 "nguid": "B3F1DF7D84D34BFE9B51AC4C64A4BB40", 00:35:38.387 "uuid": "b3f1df7d-84d3-4bfe-9b51-ac4c64a4bb40", 00:35:38.387 "no_auto_visible": false 00:35:38.387 } 00:35:38.387 } 00:35:38.387 }, 00:35:38.387 { 00:35:38.387 "method": "nvmf_subsystem_add_listener", 00:35:38.387 "params": { 00:35:38.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.387 "listen_address": { 00:35:38.387 "trtype": "TCP", 00:35:38.387 "adrfam": "IPv4", 00:35:38.387 "traddr": "10.0.0.2", 00:35:38.387 "trsvcid": "4420" 00:35:38.387 }, 00:35:38.387 "secure_channel": true 00:35:38.387 } 00:35:38.387 } 00:35:38.387 ] 00:35:38.387 } 00:35:38.387 ] 00:35:38.387 }' 00:35:38.387 11:44:07 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:35:38.653 11:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:35:38.653 "subsystems": [ 00:35:38.653 { 00:35:38.653 "subsystem": "keyring", 00:35:38.653 "config": [] 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "subsystem": "iobuf", 00:35:38.653 "config": [ 00:35:38.653 { 00:35:38.653 "method": "iobuf_set_options", 00:35:38.653 "params": { 00:35:38.653 "small_pool_count": 8192, 00:35:38.653 "large_pool_count": 1024, 00:35:38.653 "small_bufsize": 8192, 00:35:38.653 "large_bufsize": 135168 00:35:38.653 } 00:35:38.653 } 00:35:38.653 ] 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "subsystem": "sock", 00:35:38.653 "config": [ 00:35:38.653 { 00:35:38.653 "method": "sock_set_default_impl", 00:35:38.653 "params": { 00:35:38.653 "impl_name": "posix" 00:35:38.653 } 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "method": "sock_impl_set_options", 00:35:38.653 "params": { 00:35:38.653 "impl_name": "ssl", 00:35:38.653 "recv_buf_size": 4096, 00:35:38.653 "send_buf_size": 4096, 00:35:38.653 "enable_recv_pipe": true, 00:35:38.653 "enable_quickack": false, 00:35:38.653 "enable_placement_id": 0, 00:35:38.653 "enable_zerocopy_send_server": true, 00:35:38.653 "enable_zerocopy_send_client": false, 00:35:38.653 "zerocopy_threshold": 0, 00:35:38.653 "tls_version": 0, 00:35:38.653 "enable_ktls": false 00:35:38.653 } 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "method": "sock_impl_set_options", 00:35:38.653 "params": { 00:35:38.653 "impl_name": "posix", 00:35:38.653 "recv_buf_size": 2097152, 00:35:38.653 "send_buf_size": 2097152, 00:35:38.653 "enable_recv_pipe": true, 00:35:38.653 "enable_quickack": false, 00:35:38.653 "enable_placement_id": 0, 00:35:38.653 "enable_zerocopy_send_server": true, 00:35:38.653 "enable_zerocopy_send_client": false, 00:35:38.653 "zerocopy_threshold": 0, 00:35:38.653 "tls_version": 0, 00:35:38.653 "enable_ktls": false 00:35:38.653 } 00:35:38.653 } 00:35:38.653 ] 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "subsystem": "vmd", 00:35:38.653 "config": [] 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "subsystem": "accel", 00:35:38.653 "config": [ 00:35:38.653 { 00:35:38.653 "method": "accel_set_options", 00:35:38.653 "params": { 00:35:38.653 "small_cache_size": 128, 00:35:38.653 "large_cache_size": 16, 00:35:38.653 "task_count": 2048, 00:35:38.653 "sequence_count": 2048, 00:35:38.653 "buf_count": 2048 00:35:38.653 } 00:35:38.653 } 00:35:38.653 ] 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "subsystem": "bdev", 00:35:38.653 "config": [ 00:35:38.653 { 00:35:38.653 "method": "bdev_set_options", 00:35:38.653 "params": { 00:35:38.653 "bdev_io_pool_size": 65535, 00:35:38.653 "bdev_io_cache_size": 256, 00:35:38.653 "bdev_auto_examine": true, 00:35:38.653 "iobuf_small_cache_size": 128, 00:35:38.653 "iobuf_large_cache_size": 16 00:35:38.653 } 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "method": "bdev_raid_set_options", 00:35:38.653 "params": { 00:35:38.653 "process_window_size_kb": 1024 00:35:38.653 } 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "method": "bdev_iscsi_set_options", 00:35:38.653 "params": { 00:35:38.653 "timeout_sec": 30 00:35:38.653 } 00:35:38.653 }, 00:35:38.653 { 00:35:38.653 "method": "bdev_nvme_set_options", 00:35:38.653 "params": { 00:35:38.653 "action_on_timeout": "none", 00:35:38.653 "timeout_us": 0, 00:35:38.653 "timeout_admin_us": 0, 00:35:38.653 "keep_alive_timeout_ms": 10000, 00:35:38.653 "arbitration_burst": 0, 
00:35:38.653 "low_priority_weight": 0, 00:35:38.653 "medium_priority_weight": 0, 00:35:38.653 "high_priority_weight": 0, 00:35:38.653 "nvme_adminq_poll_period_us": 10000, 00:35:38.653 "nvme_ioq_poll_period_us": 0, 00:35:38.653 "io_queue_requests": 512, 00:35:38.653 "delay_cmd_submit": true, 00:35:38.653 "transport_retry_count": 4, 00:35:38.653 "bdev_retry_count": 3, 00:35:38.653 "transport_ack_timeout": 0, 00:35:38.653 "ctrlr_loss_timeout_sec": 0, 00:35:38.653 "reconnect_delay_sec": 0, 00:35:38.653 "fast_io_fail_timeout_sec": 0, 00:35:38.654 "disable_auto_failback": false, 00:35:38.654 "generate_uuids": false, 00:35:38.654 "transport_tos": 0, 00:35:38.654 "nvme_error_stat": false, 00:35:38.654 "rdma_srq_size": 0, 00:35:38.654 "io_path_stat": false, 00:35:38.654 "allow_accel_sequence": false, 00:35:38.654 "rdma_max_cq_size": 0, 00:35:38.654 "rdma_cm_event_timeout_ms": 0, 00:35:38.654 "dhchap_digests": [ 00:35:38.654 "sha256", 00:35:38.654 "sha384", 00:35:38.654 "sha512" 00:35:38.654 ], 00:35:38.654 "dhchap_dhgroups": [ 00:35:38.654 "null", 00:35:38.654 "ffdhe2048", 00:35:38.654 "ffdhe3072", 00:35:38.654 "ffdhe4096", 00:35:38.654 "ffdhe6144", 00:35:38.654 "ffdhe8192" 00:35:38.654 ] 00:35:38.654 } 00:35:38.654 }, 00:35:38.654 { 00:35:38.654 "method": "bdev_nvme_attach_controller", 00:35:38.654 "params": { 00:35:38.654 "name": "TLSTEST", 00:35:38.654 "trtype": "TCP", 00:35:38.654 "adrfam": "IPv4", 00:35:38.654 "traddr": "10.0.0.2", 00:35:38.654 "trsvcid": "4420", 00:35:38.654 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.654 "prchk_reftag": false, 00:35:38.654 "prchk_guard": false, 00:35:38.654 "ctrlr_loss_timeout_sec": 0, 00:35:38.654 "reconnect_delay_sec": 0, 00:35:38.654 "fast_io_fail_timeout_sec": 0, 00:35:38.654 "psk": "/tmp/tmp.4TOX5ikiIP", 00:35:38.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:38.654 "hdgst": false, 00:35:38.654 "ddgst": false 00:35:38.654 } 00:35:38.654 }, 00:35:38.654 { 00:35:38.654 "method": "bdev_nvme_set_hotplug", 00:35:38.654 "params": { 00:35:38.654 "period_us": 100000, 00:35:38.654 "enable": false 00:35:38.654 } 00:35:38.654 }, 00:35:38.654 { 00:35:38.654 "method": "bdev_wait_for_examine" 00:35:38.654 } 00:35:38.654 ] 00:35:38.654 }, 00:35:38.654 { 00:35:38.654 "subsystem": "nbd", 00:35:38.654 "config": [] 00:35:38.654 } 00:35:38.654 ] 00:35:38.654 }' 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2371228 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2371228 ']' 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2371228 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2371228 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2371228' 00:35:38.654 killing process with pid 2371228 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2371228 00:35:38.654 Received shutdown signal, test time was about 10.000000 seconds 00:35:38.654 00:35:38.654 Latency(us) 00:35:38.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:35:38.654 =================================================================================================================== 00:35:38.654 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:38.654 [2024-06-10 11:44:07.566209] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:35:38.654 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2371228 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2370866 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2370866 ']' 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2370866 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2370866 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2370866' 00:35:38.916 killing process with pid 2370866 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2370866 00:35:38.916 [2024-06-10 11:44:07.729531] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2370866 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:38.916 11:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:35:38.916 "subsystems": [ 00:35:38.916 { 00:35:38.916 "subsystem": "keyring", 00:35:38.916 "config": [] 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "subsystem": "iobuf", 00:35:38.916 "config": [ 00:35:38.916 { 00:35:38.916 "method": "iobuf_set_options", 00:35:38.916 "params": { 00:35:38.916 "small_pool_count": 8192, 00:35:38.916 "large_pool_count": 1024, 00:35:38.916 "small_bufsize": 8192, 00:35:38.916 "large_bufsize": 135168 00:35:38.916 } 00:35:38.916 } 00:35:38.916 ] 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "subsystem": "sock", 00:35:38.916 "config": [ 00:35:38.916 { 00:35:38.916 "method": "sock_set_default_impl", 00:35:38.916 "params": { 00:35:38.916 "impl_name": "posix" 00:35:38.916 } 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "method": "sock_impl_set_options", 00:35:38.916 "params": { 00:35:38.916 "impl_name": "ssl", 00:35:38.916 "recv_buf_size": 4096, 00:35:38.916 "send_buf_size": 4096, 00:35:38.916 "enable_recv_pipe": true, 00:35:38.916 "enable_quickack": false, 00:35:38.916 "enable_placement_id": 0, 00:35:38.916 "enable_zerocopy_send_server": true, 00:35:38.916 "enable_zerocopy_send_client": false, 00:35:38.916 "zerocopy_threshold": 0, 00:35:38.916 "tls_version": 0, 00:35:38.916 "enable_ktls": false 00:35:38.916 } 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "method": "sock_impl_set_options", 00:35:38.916 "params": { 00:35:38.916 "impl_name": "posix", 00:35:38.916 "recv_buf_size": 2097152, 00:35:38.916 "send_buf_size": 2097152, 00:35:38.916 "enable_recv_pipe": true, 
00:35:38.916 "enable_quickack": false, 00:35:38.916 "enable_placement_id": 0, 00:35:38.916 "enable_zerocopy_send_server": true, 00:35:38.916 "enable_zerocopy_send_client": false, 00:35:38.916 "zerocopy_threshold": 0, 00:35:38.916 "tls_version": 0, 00:35:38.916 "enable_ktls": false 00:35:38.916 } 00:35:38.916 } 00:35:38.916 ] 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "subsystem": "vmd", 00:35:38.916 "config": [] 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "subsystem": "accel", 00:35:38.916 "config": [ 00:35:38.916 { 00:35:38.916 "method": "accel_set_options", 00:35:38.916 "params": { 00:35:38.916 "small_cache_size": 128, 00:35:38.916 "large_cache_size": 16, 00:35:38.916 "task_count": 2048, 00:35:38.916 "sequence_count": 2048, 00:35:38.916 "buf_count": 2048 00:35:38.916 } 00:35:38.916 } 00:35:38.916 ] 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "subsystem": "bdev", 00:35:38.916 "config": [ 00:35:38.916 { 00:35:38.916 "method": "bdev_set_options", 00:35:38.916 "params": { 00:35:38.916 "bdev_io_pool_size": 65535, 00:35:38.916 "bdev_io_cache_size": 256, 00:35:38.916 "bdev_auto_examine": true, 00:35:38.916 "iobuf_small_cache_size": 128, 00:35:38.916 "iobuf_large_cache_size": 16 00:35:38.916 } 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "method": "bdev_raid_set_options", 00:35:38.916 "params": { 00:35:38.916 "process_window_size_kb": 1024 00:35:38.916 } 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "method": "bdev_iscsi_set_options", 00:35:38.916 "params": { 00:35:38.916 "timeout_sec": 30 00:35:38.916 } 00:35:38.916 }, 00:35:38.916 { 00:35:38.916 "method": "bdev_nvme_set_options", 00:35:38.916 "params": { 00:35:38.916 "action_on_timeout": "none", 00:35:38.916 "timeout_us": 0, 00:35:38.916 "timeout_admin_us": 0, 00:35:38.916 "keep_alive_timeout_ms": 10000, 00:35:38.916 "arbitration_burst": 0, 00:35:38.916 "low_priority_weight": 0, 00:35:38.916 "medium_priority_weight": 0, 00:35:38.916 "high_priority_weight": 0, 00:35:38.916 "nvme_adminq_poll_period_us": 10000, 00:35:38.916 "nvme_ioq_poll_period_us": 0, 00:35:38.917 "io_queue_requests": 0, 00:35:38.917 "delay_cmd_submit": true, 00:35:38.917 "transport_retry_count": 4, 00:35:38.917 "bdev_retry_count": 3, 00:35:38.917 "transport_ack_timeout": 0, 00:35:38.917 "ctrlr_loss_timeout_sec": 0, 00:35:38.917 "reconnect_delay_sec": 0, 00:35:38.917 "fast_io_fail_timeout_sec": 0, 00:35:38.917 "disable_auto_failback": false, 00:35:38.917 "generate_uuids": false, 00:35:38.917 "transport_tos": 0, 00:35:38.917 "nvme_error_stat": false, 00:35:38.917 "rdma_srq_size": 0, 00:35:38.917 "io_path_stat": false, 00:35:38.917 "allow_accel_sequence": false, 00:35:38.917 "rdma_max_cq_size": 0, 00:35:38.917 "rdma_cm_event_timeout_ms": 0, 00:35:38.917 "dhchap_digests": [ 00:35:38.917 "sha256", 00:35:38.917 "sha384", 00:35:38.917 "sha512" 00:35:38.917 ], 00:35:38.917 "dhchap_dhgroups": [ 00:35:38.917 "null", 00:35:38.917 "ffdhe2048", 00:35:38.917 "ffdhe3072", 00:35:38.917 "ffdhe4096", 00:35:38.917 "ffdhe6144", 00:35:38.917 "ffdhe8192" 00:35:38.917 ] 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "bdev_nvme_set_hotplug", 00:35:38.917 "params": { 00:35:38.917 "period_us": 100000, 00:35:38.917 "enable": false 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "bdev_malloc_create", 00:35:38.917 "params": { 00:35:38.917 "name": "malloc0", 00:35:38.917 "num_blocks": 8192, 00:35:38.917 "block_size": 4096, 00:35:38.917 "physical_block_size": 4096, 00:35:38.917 "uuid": "b3f1df7d-84d3-4bfe-9b51-ac4c64a4bb40", 00:35:38.917 "optimal_io_boundary": 0 
00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "bdev_wait_for_examine" 00:35:38.917 } 00:35:38.917 ] 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "subsystem": "nbd", 00:35:38.917 "config": [] 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "subsystem": "scheduler", 00:35:38.917 "config": [ 00:35:38.917 { 00:35:38.917 "method": "framework_set_scheduler", 00:35:38.917 "params": { 00:35:38.917 "name": "static" 00:35:38.917 } 00:35:38.917 } 00:35:38.917 ] 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "subsystem": "nvmf", 00:35:38.917 "config": [ 00:35:38.917 { 00:35:38.917 "method": "nvmf_set_config", 00:35:38.917 "params": { 00:35:38.917 "discovery_filter": "match_any", 00:35:38.917 "admin_cmd_passthru": { 00:35:38.917 "identify_ctrlr": false 00:35:38.917 } 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "nvmf_set_max_subsystems", 00:35:38.917 "params": { 00:35:38.917 "max_subsystems": 1024 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "nvmf_set_crdt", 00:35:38.917 "params": { 00:35:38.917 "crdt1": 0, 00:35:38.917 "crdt2": 0, 00:35:38.917 "crdt3": 0 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "nvmf_create_transport", 00:35:38.917 "params": { 00:35:38.917 "trtype": "TCP", 00:35:38.917 "max_queue_depth": 128, 00:35:38.917 "max_io_qpairs_per_ctrlr": 127, 00:35:38.917 "in_capsule_data_size": 4096, 00:35:38.917 "max_io_size": 131072, 00:35:38.917 "io_unit_size": 131072, 00:35:38.917 "max_aq_depth": 128, 00:35:38.917 "num_shared_buffers": 511, 00:35:38.917 "buf_cache_size": 4294967295, 00:35:38.917 "dif_insert_or_strip": false, 00:35:38.917 "zcopy": false, 00:35:38.917 "c2h_success": false, 00:35:38.917 "sock_priority": 0, 00:35:38.917 "abort_timeout_sec": 1, 00:35:38.917 "ack_timeout": 0, 00:35:38.917 "data_wr_pool_size": 0 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "nvmf_create_subsystem", 00:35:38.917 "params": { 00:35:38.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.917 "allow_any_host": false, 00:35:38.917 "serial_number": "SPDK00000000000001", 00:35:38.917 "model_number": "SPDK bdev Controller", 00:35:38.917 "max_namespaces": 10, 00:35:38.917 "min_cntlid": 1, 00:35:38.917 "max_cntlid": 65519, 00:35:38.917 "ana_reporting": false 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "nvmf_subsystem_add_host", 00:35:38.917 "params": { 00:35:38.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.917 "host": "nqn.2016-06.io.spdk:host1", 00:35:38.917 "psk": "/tmp/tmp.4TOX5ikiIP" 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "nvmf_subsystem_add_ns", 00:35:38.917 "params": { 00:35:38.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.917 "namespace": { 00:35:38.917 "nsid": 1, 00:35:38.917 "bdev_name": "malloc0", 00:35:38.917 "nguid": "B3F1DF7D84D34BFE9B51AC4C64A4BB40", 00:35:38.917 "uuid": "b3f1df7d-84d3-4bfe-9b51-ac4c64a4bb40", 00:35:38.917 "no_auto_visible": false 00:35:38.917 } 00:35:38.917 } 00:35:38.917 }, 00:35:38.917 { 00:35:38.917 "method": "nvmf_subsystem_add_listener", 00:35:38.917 "params": { 00:35:38.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.917 "listen_address": { 00:35:38.917 "trtype": "TCP", 00:35:38.917 "adrfam": "IPv4", 00:35:38.917 "traddr": "10.0.0.2", 00:35:38.917 "trsvcid": "4420" 00:35:38.917 }, 00:35:38.917 "secure_channel": true 00:35:38.917 } 00:35:38.917 } 00:35:38.917 ] 00:35:38.917 } 00:35:38.917 ] 00:35:38.917 }' 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:38.917 
11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2371587 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2371587 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2371587 ']' 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:38.917 11:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:39.178 [2024-06-10 11:44:07.907502] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:35:39.178 [2024-06-10 11:44:07.907554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.178 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.178 [2024-06-10 11:44:07.969501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.178 [2024-06-10 11:44:08.032319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.178 [2024-06-10 11:44:08.032354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.178 [2024-06-10 11:44:08.032362] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.178 [2024-06-10 11:44:08.032368] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.178 [2024-06-10 11:44:08.032377] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
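The target above is launched with its whole configuration piped in on /dev/fd/62, i.e. the JSON produced by an earlier save_config call is replayed into a fresh nvmf_tgt. A minimal sketch of that pattern, assuming a target already answering on /var/tmp/spdk.sock; the dump path is illustrative, not taken from this run:

# Dump the live configuration of the running target:
./scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/tgt_config.json
# Feed the dump back into a new target instance; bash process substitution
# plays the role of the /dev/fd/62 descriptor seen in the log:
./build/bin/nvmf_tgt -m 0x2 -c <(cat /tmp/tgt_config.json)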
00:35:39.178 [2024-06-10 11:44:08.032434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.439 [2024-06-10 11:44:08.221524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.439 [2024-06-10 11:44:08.237466] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:39.439 [2024-06-10 11:44:08.253522] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:39.439 [2024-06-10 11:44:08.265860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2371725 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2371725 /var/tmp/bdevperf.sock 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2371725 ']' 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:40.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
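The bdevperf invocation that follows is started with -z, so it parks after initialization, takes its configuration on /dev/fd/63, and only starts I/O once the perform_tests RPC arrives. A sketch of the same flow with the socket path and runtime taken from the log; the config file path and the polling loop are illustrative simplifications of what waitforlisten does in the harness:

# Start bdevperf detached, armed but idle (-z), listening on its own RPC socket:
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /tmp/bdevperf_config.json &
# Poll the RPC socket until bdevperf answers:
until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
# Kick off the verify workload that -z left waiting:
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests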
00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:35:40.012 11:44:08 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:35:40.012 "subsystems": [ 00:35:40.012 { 00:35:40.012 "subsystem": "keyring", 00:35:40.012 "config": [] 00:35:40.012 }, 00:35:40.012 { 00:35:40.012 "subsystem": "iobuf", 00:35:40.012 "config": [ 00:35:40.012 { 00:35:40.012 "method": "iobuf_set_options", 00:35:40.012 "params": { 00:35:40.012 "small_pool_count": 8192, 00:35:40.012 "large_pool_count": 1024, 00:35:40.012 "small_bufsize": 8192, 00:35:40.012 "large_bufsize": 135168 00:35:40.012 } 00:35:40.012 } 00:35:40.012 ] 00:35:40.012 }, 00:35:40.012 { 00:35:40.012 "subsystem": "sock", 00:35:40.012 "config": [ 00:35:40.012 { 00:35:40.012 "method": "sock_set_default_impl", 00:35:40.012 "params": { 00:35:40.012 "impl_name": "posix" 00:35:40.012 } 00:35:40.012 }, 00:35:40.012 { 00:35:40.012 "method": "sock_impl_set_options", 00:35:40.012 "params": { 00:35:40.012 "impl_name": "ssl", 00:35:40.012 "recv_buf_size": 4096, 00:35:40.012 "send_buf_size": 4096, 00:35:40.012 "enable_recv_pipe": true, 00:35:40.012 "enable_quickack": false, 00:35:40.012 "enable_placement_id": 0, 00:35:40.012 "enable_zerocopy_send_server": true, 00:35:40.012 "enable_zerocopy_send_client": false, 00:35:40.012 "zerocopy_threshold": 0, 00:35:40.012 "tls_version": 0, 00:35:40.012 "enable_ktls": false 00:35:40.012 } 00:35:40.012 }, 00:35:40.012 { 00:35:40.012 "method": "sock_impl_set_options", 00:35:40.012 "params": { 00:35:40.012 "impl_name": "posix", 00:35:40.012 "recv_buf_size": 2097152, 00:35:40.012 "send_buf_size": 2097152, 00:35:40.012 "enable_recv_pipe": true, 00:35:40.012 "enable_quickack": false, 00:35:40.012 "enable_placement_id": 0, 00:35:40.012 "enable_zerocopy_send_server": true, 00:35:40.012 "enable_zerocopy_send_client": false, 00:35:40.012 "zerocopy_threshold": 0, 00:35:40.012 "tls_version": 0, 00:35:40.012 "enable_ktls": false 00:35:40.012 } 00:35:40.012 } 00:35:40.012 ] 00:35:40.012 }, 00:35:40.012 { 00:35:40.012 "subsystem": "vmd", 00:35:40.012 "config": [] 00:35:40.012 }, 00:35:40.012 { 00:35:40.012 "subsystem": "accel", 00:35:40.012 "config": [ 00:35:40.012 { 00:35:40.012 "method": "accel_set_options", 00:35:40.012 "params": { 00:35:40.012 "small_cache_size": 128, 00:35:40.012 "large_cache_size": 16, 00:35:40.012 "task_count": 2048, 00:35:40.012 "sequence_count": 2048, 00:35:40.012 "buf_count": 2048 00:35:40.012 } 00:35:40.012 } 00:35:40.012 ] 00:35:40.012 }, 00:35:40.012 { 00:35:40.012 "subsystem": "bdev", 00:35:40.012 "config": [ 00:35:40.012 { 00:35:40.013 "method": "bdev_set_options", 00:35:40.013 "params": { 00:35:40.013 "bdev_io_pool_size": 65535, 00:35:40.013 "bdev_io_cache_size": 256, 00:35:40.013 "bdev_auto_examine": true, 00:35:40.013 "iobuf_small_cache_size": 128, 00:35:40.013 "iobuf_large_cache_size": 16 00:35:40.013 } 00:35:40.013 }, 00:35:40.013 { 00:35:40.013 "method": "bdev_raid_set_options", 00:35:40.013 "params": { 00:35:40.013 "process_window_size_kb": 1024 00:35:40.013 } 00:35:40.013 }, 00:35:40.013 { 00:35:40.013 "method": "bdev_iscsi_set_options", 00:35:40.013 "params": { 00:35:40.013 "timeout_sec": 30 00:35:40.013 } 00:35:40.013 }, 00:35:40.013 { 00:35:40.013 "method": 
"bdev_nvme_set_options", 00:35:40.013 "params": { 00:35:40.013 "action_on_timeout": "none", 00:35:40.013 "timeout_us": 0, 00:35:40.013 "timeout_admin_us": 0, 00:35:40.013 "keep_alive_timeout_ms": 10000, 00:35:40.013 "arbitration_burst": 0, 00:35:40.013 "low_priority_weight": 0, 00:35:40.013 "medium_priority_weight": 0, 00:35:40.013 "high_priority_weight": 0, 00:35:40.013 "nvme_adminq_poll_period_us": 10000, 00:35:40.013 "nvme_ioq_poll_period_us": 0, 00:35:40.013 "io_queue_requests": 512, 00:35:40.013 "delay_cmd_submit": true, 00:35:40.013 "transport_retry_count": 4, 00:35:40.013 "bdev_retry_count": 3, 00:35:40.013 "transport_ack_timeout": 0, 00:35:40.013 "ctrlr_loss_timeout_sec": 0, 00:35:40.013 "reconnect_delay_sec": 0, 00:35:40.013 "fast_io_fail_timeout_sec": 0, 00:35:40.013 "disable_auto_failback": false, 00:35:40.013 "generate_uuids": false, 00:35:40.013 "transport_tos": 0, 00:35:40.013 "nvme_error_stat": false, 00:35:40.013 "rdma_srq_size": 0, 00:35:40.013 "io_path_stat": false, 00:35:40.013 "allow_accel_sequence": false, 00:35:40.013 "rdma_max_cq_size": 0, 00:35:40.013 "rdma_cm_event_timeout_ms": 0, 00:35:40.013 "dhchap_digests": [ 00:35:40.013 "sha256", 00:35:40.013 "sha384", 00:35:40.013 "sha512" 00:35:40.013 ], 00:35:40.013 "dhchap_dhgroups": [ 00:35:40.013 "null", 00:35:40.013 "ffdhe2048", 00:35:40.013 "ffdhe3072", 00:35:40.013 "ffdhe4096", 00:35:40.013 "ffdhe6144", 00:35:40.013 "ffdhe8192" 00:35:40.013 ] 00:35:40.013 } 00:35:40.013 }, 00:35:40.013 { 00:35:40.013 "method": "bdev_nvme_attach_controller", 00:35:40.013 "params": { 00:35:40.013 "name": "TLSTEST", 00:35:40.013 "trtype": "TCP", 00:35:40.013 "adrfam": "IPv4", 00:35:40.013 "traddr": "10.0.0.2", 00:35:40.013 "trsvcid": "4420", 00:35:40.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.013 "prchk_reftag": false, 00:35:40.013 "prchk_guard": false, 00:35:40.013 "ctrlr_loss_timeout_sec": 0, 00:35:40.013 "reconnect_delay_sec": 0, 00:35:40.013 "fast_io_fail_timeout_sec": 0, 00:35:40.013 "psk": "/tmp/tmp.4TOX5ikiIP", 00:35:40.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.013 "hdgst": false, 00:35:40.013 "ddgst": false 00:35:40.013 } 00:35:40.013 }, 00:35:40.013 { 00:35:40.013 "method": "bdev_nvme_set_hotplug", 00:35:40.013 "params": { 00:35:40.013 "period_us": 100000, 00:35:40.013 "enable": false 00:35:40.013 } 00:35:40.013 }, 00:35:40.013 { 00:35:40.013 "method": "bdev_wait_for_examine" 00:35:40.013 } 00:35:40.013 ] 00:35:40.013 }, 00:35:40.013 { 00:35:40.013 "subsystem": "nbd", 00:35:40.013 "config": [] 00:35:40.013 } 00:35:40.013 ] 00:35:40.013 }' 00:35:40.013 [2024-06-10 11:44:08.798780] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:40.013 [2024-06-10 11:44:08.798849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371725 ] 00:35:40.013 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.013 [2024-06-10 11:44:08.848615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.013 [2024-06-10 11:44:08.900819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:40.274 [2024-06-10 11:44:09.025583] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:40.274 [2024-06-10 11:44:09.025644] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:35:40.845 11:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:40.845 11:44:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:40.845 11:44:09 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:40.845 Running I/O for 10 seconds... 00:35:50.854 00:35:50.854 Latency(us) 00:35:50.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:50.854 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:50.854 Verification LBA range: start 0x0 length 0x2000 00:35:50.854 TLSTESTn1 : 10.03 2310.24 9.02 0.00 0.00 55321.38 5734.40 92187.31 00:35:50.854 =================================================================================================================== 00:35:50.854 Total : 2310.24 9.02 0.00 0.00 55321.38 5734.40 92187.31 00:35:50.854 0 00:35:50.854 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:50.854 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2371725 00:35:50.854 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2371725 ']' 00:35:50.854 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2371725 00:35:50.854 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:50.854 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:50.854 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2371725 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2371725' 00:35:51.116 killing process with pid 2371725 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2371725 00:35:51.116 Received shutdown signal, test time was about 10.000000 seconds 00:35:51.116 00:35:51.116 Latency(us) 00:35:51.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.116 =================================================================================================================== 00:35:51.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:51.116 [2024-06-10 11:44:19.870889] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2371725 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2371587 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2371587 ']' 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2371587 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:51.116 11:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2371587 00:35:51.116 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:51.116 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:51.116 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2371587' 00:35:51.116 killing process with pid 2371587 00:35:51.116 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2371587 00:35:51.116 [2024-06-10 11:44:20.037082] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:51.116 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2371587 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2373962 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2373962 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2373962 ']' 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:51.377 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:51.377 [2024-06-10 11:44:20.217310] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:35:51.377 [2024-06-10 11:44:20.217362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.377 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.377 [2024-06-10 11:44:20.279728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.377 [2024-06-10 11:44:20.343613] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
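The teardown steps above (killprocess 2371725 and 2371587) follow a fixed pattern: check that the pid is still alive, make sure it is not the sudo wrapper, then kill and reap it. A rough sketch of that helper, with the sudo handling simplified for illustration rather than copied from autotest_common.sh:

killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0            # nothing to do if it already exited
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 0                    # simplified: never signal the sudo wrapper directly
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" || true                               # reap and ignore the exit status
}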
00:35:51.377 [2024-06-10 11:44:20.343648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.377 [2024-06-10 11:44:20.343656] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.377 [2024-06-10 11:44:20.343662] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.377 [2024-06-10 11:44:20.343668] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:51.377 [2024-06-10 11:44:20.343689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.4TOX5ikiIP 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4TOX5ikiIP 00:35:51.637 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:51.897 [2024-06-10 11:44:20.657212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.897 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:52.157 11:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:35:52.157 [2024-06-10 11:44:21.058227] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:52.157 [2024-06-10 11:44:21.058444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.157 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:52.417 malloc0 00:35:52.417 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:52.678 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP 00:35:52.678 [2024-06-10 11:44:21.642510] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2374319 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4k -w verify -t 1 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2374319 /var/tmp/bdevperf.sock 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2374319 ']' 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:52.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:52.938 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:52.938 [2024-06-10 11:44:21.709788] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:35:52.938 [2024-06-10 11:44:21.709845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374319 ] 00:35:52.938 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.938 [2024-06-10 11:44:21.767467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.938 [2024-06-10 11:44:21.831378] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.199 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:53.199 11:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:53.199 11:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4TOX5ikiIP 00:35:53.199 11:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:53.460 [2024-06-10 11:44:22.272687] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:53.460 nvme0n1 00:35:53.460 11:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:53.720 Running I/O for 1 seconds... 
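The run above exercises the TLS path end to end: the target publishes a TLS-enabled listener (-k) and registers the PSK for the host, while the bdevperf side loads the same PSK into a keyring entry and attaches with --psk key0. The RPC sequence, condensed from the trace above (the PSK file is the temporary key generated earlier in the test):

# Target side (default /var/tmp/spdk.sock):
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4TOX5ikiIP
# Initiator side (bdevperf RPC socket):
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4TOX5ikiIP
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The keyring_file_add_key/--psk key0 pair is what this run uses in place of the deprecated 'PSK path' argument flagged in the removal-in-v24.09 warnings above.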
00:35:54.665 00:35:54.665 Latency(us) 00:35:54.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.665 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:54.665 Verification LBA range: start 0x0 length 0x2000 00:35:54.665 nvme0n1 : 1.04 2973.00 11.61 0.00 0.00 42373.58 6662.83 75584.85 00:35:54.665 =================================================================================================================== 00:35:54.665 Total : 2973.00 11.61 0.00 0.00 42373.58 6662.83 75584.85 00:35:54.665 0 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2374319 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2374319 ']' 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2374319 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2374319 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2374319' 00:35:54.665 killing process with pid 2374319 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2374319 00:35:54.665 Received shutdown signal, test time was about 1.000000 seconds 00:35:54.665 00:35:54.665 Latency(us) 00:35:54.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.665 =================================================================================================================== 00:35:54.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.665 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2374319 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2373962 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2373962 ']' 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2373962 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2373962 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2373962' 00:35:54.926 killing process with pid 2373962 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2373962 00:35:54.926 [2024-06-10 11:44:23.778393] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:35:54.926 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2373962 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:55.187 
11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2374673 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2374673 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2374673 ']' 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:55.187 11:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:55.187 [2024-06-10 11:44:23.981111] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:35:55.187 [2024-06-10 11:44:23.981163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:55.187 EAL: No free 2048 kB hugepages reported on node 1 00:35:55.187 [2024-06-10 11:44:24.046084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.187 [2024-06-10 11:44:24.109342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:55.187 [2024-06-10 11:44:24.109381] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:55.187 [2024-06-10 11:44:24.109390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:55.187 [2024-06-10 11:44:24.109398] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:55.187 [2024-06-10 11:44:24.109405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
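Each nvmfappstart/waitforlisten pair above boils down to launching the target in the test namespace and polling its RPC socket until it answers. A hedged sketch of that wait loop; the function body, retry count and sleep interval are illustrative, not lifted from autotest_common.sh:

waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock}
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1                        # give up if the app died
    ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
    sleep 0.1
  done
  return 1
}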
00:35:55.187 [2024-06-10 11:44:24.109424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:56.129 [2024-06-10 11:44:24.836071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.129 malloc0 00:35:56.129 [2024-06-10 11:44:24.862823] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:56.129 [2024-06-10 11:44:24.863041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2375021 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2375021 /var/tmp/bdevperf.sock 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2375021 ']' 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:56.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:56.129 11:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:56.129 [2024-06-10 11:44:24.947612] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:56.129 [2024-06-10 11:44:24.947664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375021 ] 00:35:56.129 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.129 [2024-06-10 11:44:25.006573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.129 [2024-06-10 11:44:25.070433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.389 11:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:56.389 11:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:56.389 11:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4TOX5ikiIP 00:35:56.650 11:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:56.650 [2024-06-10 11:44:25.560019] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:56.910 nvme0n1 00:35:56.910 11:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:56.910 Running I/O for 1 seconds... 00:35:57.852 00:35:57.852 Latency(us) 00:35:57.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:57.852 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:57.852 Verification LBA range: start 0x0 length 0x2000 00:35:57.852 nvme0n1 : 1.02 3816.73 14.91 0.00 0.00 33188.01 8410.45 54831.79 00:35:57.852 =================================================================================================================== 00:35:57.852 Total : 3816.73 14.91 0.00 0.00 33188.01 8410.45 54831.79 00:35:57.852 0 00:35:57.852 11:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:35:57.852 11:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:57.852 11:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:58.114 11:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:58.114 11:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:35:58.114 "subsystems": [ 00:35:58.114 { 00:35:58.114 "subsystem": "keyring", 00:35:58.114 "config": [ 00:35:58.114 { 00:35:58.114 "method": "keyring_file_add_key", 00:35:58.114 "params": { 00:35:58.114 "name": "key0", 00:35:58.114 "path": "/tmp/tmp.4TOX5ikiIP" 00:35:58.114 } 00:35:58.114 } 00:35:58.114 ] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "iobuf", 00:35:58.114 "config": [ 00:35:58.114 { 00:35:58.114 "method": "iobuf_set_options", 00:35:58.114 "params": { 00:35:58.114 "small_pool_count": 8192, 00:35:58.114 "large_pool_count": 1024, 00:35:58.114 "small_bufsize": 8192, 00:35:58.114 "large_bufsize": 135168 00:35:58.114 } 00:35:58.114 } 00:35:58.114 ] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "sock", 00:35:58.114 "config": [ 00:35:58.114 { 00:35:58.114 "method": "sock_set_default_impl", 00:35:58.114 "params": { 00:35:58.114 "impl_name": "posix" 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 
{ 00:35:58.114 "method": "sock_impl_set_options", 00:35:58.114 "params": { 00:35:58.114 "impl_name": "ssl", 00:35:58.114 "recv_buf_size": 4096, 00:35:58.114 "send_buf_size": 4096, 00:35:58.114 "enable_recv_pipe": true, 00:35:58.114 "enable_quickack": false, 00:35:58.114 "enable_placement_id": 0, 00:35:58.114 "enable_zerocopy_send_server": true, 00:35:58.114 "enable_zerocopy_send_client": false, 00:35:58.114 "zerocopy_threshold": 0, 00:35:58.114 "tls_version": 0, 00:35:58.114 "enable_ktls": false 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "method": "sock_impl_set_options", 00:35:58.114 "params": { 00:35:58.114 "impl_name": "posix", 00:35:58.114 "recv_buf_size": 2097152, 00:35:58.114 "send_buf_size": 2097152, 00:35:58.114 "enable_recv_pipe": true, 00:35:58.114 "enable_quickack": false, 00:35:58.114 "enable_placement_id": 0, 00:35:58.114 "enable_zerocopy_send_server": true, 00:35:58.114 "enable_zerocopy_send_client": false, 00:35:58.114 "zerocopy_threshold": 0, 00:35:58.114 "tls_version": 0, 00:35:58.114 "enable_ktls": false 00:35:58.114 } 00:35:58.114 } 00:35:58.114 ] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "vmd", 00:35:58.114 "config": [] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "accel", 00:35:58.114 "config": [ 00:35:58.114 { 00:35:58.114 "method": "accel_set_options", 00:35:58.114 "params": { 00:35:58.114 "small_cache_size": 128, 00:35:58.114 "large_cache_size": 16, 00:35:58.114 "task_count": 2048, 00:35:58.114 "sequence_count": 2048, 00:35:58.114 "buf_count": 2048 00:35:58.114 } 00:35:58.114 } 00:35:58.114 ] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "bdev", 00:35:58.114 "config": [ 00:35:58.114 { 00:35:58.114 "method": "bdev_set_options", 00:35:58.114 "params": { 00:35:58.114 "bdev_io_pool_size": 65535, 00:35:58.114 "bdev_io_cache_size": 256, 00:35:58.114 "bdev_auto_examine": true, 00:35:58.114 "iobuf_small_cache_size": 128, 00:35:58.114 "iobuf_large_cache_size": 16 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "method": "bdev_raid_set_options", 00:35:58.114 "params": { 00:35:58.114 "process_window_size_kb": 1024 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "method": "bdev_iscsi_set_options", 00:35:58.114 "params": { 00:35:58.114 "timeout_sec": 30 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "method": "bdev_nvme_set_options", 00:35:58.114 "params": { 00:35:58.114 "action_on_timeout": "none", 00:35:58.114 "timeout_us": 0, 00:35:58.114 "timeout_admin_us": 0, 00:35:58.114 "keep_alive_timeout_ms": 10000, 00:35:58.114 "arbitration_burst": 0, 00:35:58.114 "low_priority_weight": 0, 00:35:58.114 "medium_priority_weight": 0, 00:35:58.114 "high_priority_weight": 0, 00:35:58.114 "nvme_adminq_poll_period_us": 10000, 00:35:58.114 "nvme_ioq_poll_period_us": 0, 00:35:58.114 "io_queue_requests": 0, 00:35:58.114 "delay_cmd_submit": true, 00:35:58.114 "transport_retry_count": 4, 00:35:58.114 "bdev_retry_count": 3, 00:35:58.114 "transport_ack_timeout": 0, 00:35:58.114 "ctrlr_loss_timeout_sec": 0, 00:35:58.114 "reconnect_delay_sec": 0, 00:35:58.114 "fast_io_fail_timeout_sec": 0, 00:35:58.114 "disable_auto_failback": false, 00:35:58.114 "generate_uuids": false, 00:35:58.114 "transport_tos": 0, 00:35:58.114 "nvme_error_stat": false, 00:35:58.114 "rdma_srq_size": 0, 00:35:58.114 "io_path_stat": false, 00:35:58.114 "allow_accel_sequence": false, 00:35:58.114 "rdma_max_cq_size": 0, 00:35:58.114 "rdma_cm_event_timeout_ms": 0, 00:35:58.114 "dhchap_digests": [ 00:35:58.114 "sha256", 00:35:58.114 "sha384", 
00:35:58.114 "sha512" 00:35:58.114 ], 00:35:58.114 "dhchap_dhgroups": [ 00:35:58.114 "null", 00:35:58.114 "ffdhe2048", 00:35:58.114 "ffdhe3072", 00:35:58.114 "ffdhe4096", 00:35:58.114 "ffdhe6144", 00:35:58.114 "ffdhe8192" 00:35:58.114 ] 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "method": "bdev_nvme_set_hotplug", 00:35:58.114 "params": { 00:35:58.114 "period_us": 100000, 00:35:58.114 "enable": false 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "method": "bdev_malloc_create", 00:35:58.114 "params": { 00:35:58.114 "name": "malloc0", 00:35:58.114 "num_blocks": 8192, 00:35:58.114 "block_size": 4096, 00:35:58.114 "physical_block_size": 4096, 00:35:58.114 "uuid": "c353284c-24f2-4f73-9604-6e0f42970bcb", 00:35:58.114 "optimal_io_boundary": 0 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "method": "bdev_wait_for_examine" 00:35:58.114 } 00:35:58.114 ] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "nbd", 00:35:58.114 "config": [] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "scheduler", 00:35:58.114 "config": [ 00:35:58.114 { 00:35:58.114 "method": "framework_set_scheduler", 00:35:58.114 "params": { 00:35:58.114 "name": "static" 00:35:58.114 } 00:35:58.114 } 00:35:58.114 ] 00:35:58.114 }, 00:35:58.114 { 00:35:58.114 "subsystem": "nvmf", 00:35:58.114 "config": [ 00:35:58.114 { 00:35:58.114 "method": "nvmf_set_config", 00:35:58.114 "params": { 00:35:58.114 "discovery_filter": "match_any", 00:35:58.114 "admin_cmd_passthru": { 00:35:58.114 "identify_ctrlr": false 00:35:58.114 } 00:35:58.114 } 00:35:58.114 }, 00:35:58.114 { 00:35:58.115 "method": "nvmf_set_max_subsystems", 00:35:58.115 "params": { 00:35:58.115 "max_subsystems": 1024 00:35:58.115 } 00:35:58.115 }, 00:35:58.115 { 00:35:58.115 "method": "nvmf_set_crdt", 00:35:58.115 "params": { 00:35:58.115 "crdt1": 0, 00:35:58.115 "crdt2": 0, 00:35:58.115 "crdt3": 0 00:35:58.115 } 00:35:58.115 }, 00:35:58.115 { 00:35:58.115 "method": "nvmf_create_transport", 00:35:58.115 "params": { 00:35:58.115 "trtype": "TCP", 00:35:58.115 "max_queue_depth": 128, 00:35:58.115 "max_io_qpairs_per_ctrlr": 127, 00:35:58.115 "in_capsule_data_size": 4096, 00:35:58.115 "max_io_size": 131072, 00:35:58.115 "io_unit_size": 131072, 00:35:58.115 "max_aq_depth": 128, 00:35:58.115 "num_shared_buffers": 511, 00:35:58.115 "buf_cache_size": 4294967295, 00:35:58.115 "dif_insert_or_strip": false, 00:35:58.115 "zcopy": false, 00:35:58.115 "c2h_success": false, 00:35:58.115 "sock_priority": 0, 00:35:58.115 "abort_timeout_sec": 1, 00:35:58.115 "ack_timeout": 0, 00:35:58.115 "data_wr_pool_size": 0 00:35:58.115 } 00:35:58.115 }, 00:35:58.115 { 00:35:58.115 "method": "nvmf_create_subsystem", 00:35:58.115 "params": { 00:35:58.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.115 "allow_any_host": false, 00:35:58.115 "serial_number": "00000000000000000000", 00:35:58.115 "model_number": "SPDK bdev Controller", 00:35:58.115 "max_namespaces": 32, 00:35:58.115 "min_cntlid": 1, 00:35:58.115 "max_cntlid": 65519, 00:35:58.115 "ana_reporting": false 00:35:58.115 } 00:35:58.115 }, 00:35:58.115 { 00:35:58.115 "method": "nvmf_subsystem_add_host", 00:35:58.115 "params": { 00:35:58.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.115 "host": "nqn.2016-06.io.spdk:host1", 00:35:58.115 "psk": "key0" 00:35:58.115 } 00:35:58.115 }, 00:35:58.115 { 00:35:58.115 "method": "nvmf_subsystem_add_ns", 00:35:58.115 "params": { 00:35:58.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.115 "namespace": { 00:35:58.115 "nsid": 1, 00:35:58.115 "bdev_name": 
"malloc0", 00:35:58.115 "nguid": "C353284C24F24F7396046E0F42970BCB", 00:35:58.115 "uuid": "c353284c-24f2-4f73-9604-6e0f42970bcb", 00:35:58.115 "no_auto_visible": false 00:35:58.115 } 00:35:58.115 } 00:35:58.115 }, 00:35:58.115 { 00:35:58.115 "method": "nvmf_subsystem_add_listener", 00:35:58.115 "params": { 00:35:58.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.115 "listen_address": { 00:35:58.115 "trtype": "TCP", 00:35:58.115 "adrfam": "IPv4", 00:35:58.115 "traddr": "10.0.0.2", 00:35:58.115 "trsvcid": "4420" 00:35:58.115 }, 00:35:58.115 "secure_channel": true 00:35:58.115 } 00:35:58.115 } 00:35:58.115 ] 00:35:58.115 } 00:35:58.115 ] 00:35:58.115 }' 00:35:58.115 11:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:35:58.376 11:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:35:58.376 "subsystems": [ 00:35:58.376 { 00:35:58.376 "subsystem": "keyring", 00:35:58.376 "config": [ 00:35:58.376 { 00:35:58.376 "method": "keyring_file_add_key", 00:35:58.376 "params": { 00:35:58.376 "name": "key0", 00:35:58.376 "path": "/tmp/tmp.4TOX5ikiIP" 00:35:58.376 } 00:35:58.376 } 00:35:58.376 ] 00:35:58.376 }, 00:35:58.376 { 00:35:58.376 "subsystem": "iobuf", 00:35:58.376 "config": [ 00:35:58.376 { 00:35:58.376 "method": "iobuf_set_options", 00:35:58.376 "params": { 00:35:58.376 "small_pool_count": 8192, 00:35:58.376 "large_pool_count": 1024, 00:35:58.376 "small_bufsize": 8192, 00:35:58.376 "large_bufsize": 135168 00:35:58.376 } 00:35:58.376 } 00:35:58.376 ] 00:35:58.376 }, 00:35:58.376 { 00:35:58.376 "subsystem": "sock", 00:35:58.376 "config": [ 00:35:58.376 { 00:35:58.376 "method": "sock_set_default_impl", 00:35:58.376 "params": { 00:35:58.376 "impl_name": "posix" 00:35:58.376 } 00:35:58.376 }, 00:35:58.376 { 00:35:58.376 "method": "sock_impl_set_options", 00:35:58.376 "params": { 00:35:58.376 "impl_name": "ssl", 00:35:58.376 "recv_buf_size": 4096, 00:35:58.376 "send_buf_size": 4096, 00:35:58.376 "enable_recv_pipe": true, 00:35:58.376 "enable_quickack": false, 00:35:58.376 "enable_placement_id": 0, 00:35:58.376 "enable_zerocopy_send_server": true, 00:35:58.376 "enable_zerocopy_send_client": false, 00:35:58.376 "zerocopy_threshold": 0, 00:35:58.376 "tls_version": 0, 00:35:58.376 "enable_ktls": false 00:35:58.376 } 00:35:58.376 }, 00:35:58.377 { 00:35:58.377 "method": "sock_impl_set_options", 00:35:58.377 "params": { 00:35:58.377 "impl_name": "posix", 00:35:58.377 "recv_buf_size": 2097152, 00:35:58.377 "send_buf_size": 2097152, 00:35:58.377 "enable_recv_pipe": true, 00:35:58.377 "enable_quickack": false, 00:35:58.377 "enable_placement_id": 0, 00:35:58.377 "enable_zerocopy_send_server": true, 00:35:58.377 "enable_zerocopy_send_client": false, 00:35:58.377 "zerocopy_threshold": 0, 00:35:58.377 "tls_version": 0, 00:35:58.377 "enable_ktls": false 00:35:58.377 } 00:35:58.377 } 00:35:58.377 ] 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "subsystem": "vmd", 00:35:58.377 "config": [] 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "subsystem": "accel", 00:35:58.377 "config": [ 00:35:58.377 { 00:35:58.377 "method": "accel_set_options", 00:35:58.377 "params": { 00:35:58.377 "small_cache_size": 128, 00:35:58.377 "large_cache_size": 16, 00:35:58.377 "task_count": 2048, 00:35:58.377 "sequence_count": 2048, 00:35:58.377 "buf_count": 2048 00:35:58.377 } 00:35:58.377 } 00:35:58.377 ] 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "subsystem": "bdev", 00:35:58.377 "config": [ 00:35:58.377 { 00:35:58.377 
"method": "bdev_set_options", 00:35:58.377 "params": { 00:35:58.377 "bdev_io_pool_size": 65535, 00:35:58.377 "bdev_io_cache_size": 256, 00:35:58.377 "bdev_auto_examine": true, 00:35:58.377 "iobuf_small_cache_size": 128, 00:35:58.377 "iobuf_large_cache_size": 16 00:35:58.377 } 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "method": "bdev_raid_set_options", 00:35:58.377 "params": { 00:35:58.377 "process_window_size_kb": 1024 00:35:58.377 } 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "method": "bdev_iscsi_set_options", 00:35:58.377 "params": { 00:35:58.377 "timeout_sec": 30 00:35:58.377 } 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "method": "bdev_nvme_set_options", 00:35:58.377 "params": { 00:35:58.377 "action_on_timeout": "none", 00:35:58.377 "timeout_us": 0, 00:35:58.377 "timeout_admin_us": 0, 00:35:58.377 "keep_alive_timeout_ms": 10000, 00:35:58.377 "arbitration_burst": 0, 00:35:58.377 "low_priority_weight": 0, 00:35:58.377 "medium_priority_weight": 0, 00:35:58.377 "high_priority_weight": 0, 00:35:58.377 "nvme_adminq_poll_period_us": 10000, 00:35:58.377 "nvme_ioq_poll_period_us": 0, 00:35:58.377 "io_queue_requests": 512, 00:35:58.377 "delay_cmd_submit": true, 00:35:58.377 "transport_retry_count": 4, 00:35:58.377 "bdev_retry_count": 3, 00:35:58.377 "transport_ack_timeout": 0, 00:35:58.377 "ctrlr_loss_timeout_sec": 0, 00:35:58.377 "reconnect_delay_sec": 0, 00:35:58.377 "fast_io_fail_timeout_sec": 0, 00:35:58.377 "disable_auto_failback": false, 00:35:58.377 "generate_uuids": false, 00:35:58.377 "transport_tos": 0, 00:35:58.377 "nvme_error_stat": false, 00:35:58.377 "rdma_srq_size": 0, 00:35:58.377 "io_path_stat": false, 00:35:58.377 "allow_accel_sequence": false, 00:35:58.377 "rdma_max_cq_size": 0, 00:35:58.377 "rdma_cm_event_timeout_ms": 0, 00:35:58.377 "dhchap_digests": [ 00:35:58.377 "sha256", 00:35:58.377 "sha384", 00:35:58.377 "sha512" 00:35:58.377 ], 00:35:58.377 "dhchap_dhgroups": [ 00:35:58.377 "null", 00:35:58.377 "ffdhe2048", 00:35:58.377 "ffdhe3072", 00:35:58.377 "ffdhe4096", 00:35:58.377 "ffdhe6144", 00:35:58.377 "ffdhe8192" 00:35:58.377 ] 00:35:58.377 } 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "method": "bdev_nvme_attach_controller", 00:35:58.377 "params": { 00:35:58.377 "name": "nvme0", 00:35:58.377 "trtype": "TCP", 00:35:58.377 "adrfam": "IPv4", 00:35:58.377 "traddr": "10.0.0.2", 00:35:58.377 "trsvcid": "4420", 00:35:58.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.377 "prchk_reftag": false, 00:35:58.377 "prchk_guard": false, 00:35:58.377 "ctrlr_loss_timeout_sec": 0, 00:35:58.377 "reconnect_delay_sec": 0, 00:35:58.377 "fast_io_fail_timeout_sec": 0, 00:35:58.377 "psk": "key0", 00:35:58.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:58.377 "hdgst": false, 00:35:58.377 "ddgst": false 00:35:58.377 } 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "method": "bdev_nvme_set_hotplug", 00:35:58.377 "params": { 00:35:58.377 "period_us": 100000, 00:35:58.377 "enable": false 00:35:58.377 } 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "method": "bdev_enable_histogram", 00:35:58.377 "params": { 00:35:58.377 "name": "nvme0n1", 00:35:58.377 "enable": true 00:35:58.377 } 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "method": "bdev_wait_for_examine" 00:35:58.377 } 00:35:58.377 ] 00:35:58.377 }, 00:35:58.377 { 00:35:58.377 "subsystem": "nbd", 00:35:58.377 "config": [] 00:35:58.377 } 00:35:58.377 ] 00:35:58.377 }' 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2375021 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2375021 
']' 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2375021 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2375021 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2375021' 00:35:58.377 killing process with pid 2375021 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2375021 00:35:58.377 Received shutdown signal, test time was about 1.000000 seconds 00:35:58.377 00:35:58.377 Latency(us) 00:35:58.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.377 =================================================================================================================== 00:35:58.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:58.377 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2375021 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2374673 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2374673 ']' 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2374673 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2374673 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2374673' 00:35:58.639 killing process with pid 2374673 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2374673 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2374673 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:58.639 11:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:35:58.639 "subsystems": [ 00:35:58.639 { 00:35:58.639 "subsystem": "keyring", 00:35:58.639 "config": [ 00:35:58.639 { 00:35:58.639 "method": "keyring_file_add_key", 00:35:58.639 "params": { 00:35:58.639 "name": "key0", 00:35:58.639 "path": "/tmp/tmp.4TOX5ikiIP" 00:35:58.639 } 00:35:58.639 } 00:35:58.639 ] 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "subsystem": "iobuf", 00:35:58.639 "config": [ 00:35:58.639 { 00:35:58.639 "method": "iobuf_set_options", 00:35:58.639 "params": { 00:35:58.639 "small_pool_count": 8192, 00:35:58.639 "large_pool_count": 1024, 00:35:58.639 "small_bufsize": 8192, 00:35:58.639 "large_bufsize": 135168 00:35:58.639 } 00:35:58.639 } 
00:35:58.639 ] 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "subsystem": "sock", 00:35:58.639 "config": [ 00:35:58.639 { 00:35:58.639 "method": "sock_set_default_impl", 00:35:58.639 "params": { 00:35:58.639 "impl_name": "posix" 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "sock_impl_set_options", 00:35:58.639 "params": { 00:35:58.639 "impl_name": "ssl", 00:35:58.639 "recv_buf_size": 4096, 00:35:58.639 "send_buf_size": 4096, 00:35:58.639 "enable_recv_pipe": true, 00:35:58.639 "enable_quickack": false, 00:35:58.639 "enable_placement_id": 0, 00:35:58.639 "enable_zerocopy_send_server": true, 00:35:58.639 "enable_zerocopy_send_client": false, 00:35:58.639 "zerocopy_threshold": 0, 00:35:58.639 "tls_version": 0, 00:35:58.639 "enable_ktls": false 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "sock_impl_set_options", 00:35:58.639 "params": { 00:35:58.639 "impl_name": "posix", 00:35:58.639 "recv_buf_size": 2097152, 00:35:58.639 "send_buf_size": 2097152, 00:35:58.639 "enable_recv_pipe": true, 00:35:58.639 "enable_quickack": false, 00:35:58.639 "enable_placement_id": 0, 00:35:58.639 "enable_zerocopy_send_server": true, 00:35:58.639 "enable_zerocopy_send_client": false, 00:35:58.639 "zerocopy_threshold": 0, 00:35:58.639 "tls_version": 0, 00:35:58.639 "enable_ktls": false 00:35:58.639 } 00:35:58.639 } 00:35:58.639 ] 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "subsystem": "vmd", 00:35:58.639 "config": [] 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "subsystem": "accel", 00:35:58.639 "config": [ 00:35:58.639 { 00:35:58.639 "method": "accel_set_options", 00:35:58.639 "params": { 00:35:58.639 "small_cache_size": 128, 00:35:58.639 "large_cache_size": 16, 00:35:58.639 "task_count": 2048, 00:35:58.639 "sequence_count": 2048, 00:35:58.639 "buf_count": 2048 00:35:58.639 } 00:35:58.639 } 00:35:58.639 ] 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "subsystem": "bdev", 00:35:58.639 "config": [ 00:35:58.639 { 00:35:58.639 "method": "bdev_set_options", 00:35:58.639 "params": { 00:35:58.639 "bdev_io_pool_size": 65535, 00:35:58.639 "bdev_io_cache_size": 256, 00:35:58.639 "bdev_auto_examine": true, 00:35:58.639 "iobuf_small_cache_size": 128, 00:35:58.639 "iobuf_large_cache_size": 16 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "bdev_raid_set_options", 00:35:58.639 "params": { 00:35:58.639 "process_window_size_kb": 1024 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "bdev_iscsi_set_options", 00:35:58.639 "params": { 00:35:58.639 "timeout_sec": 30 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "bdev_nvme_set_options", 00:35:58.639 "params": { 00:35:58.639 "action_on_timeout": "none", 00:35:58.639 "timeout_us": 0, 00:35:58.639 "timeout_admin_us": 0, 00:35:58.639 "keep_alive_timeout_ms": 10000, 00:35:58.639 "arbitration_burst": 0, 00:35:58.639 "low_priority_weight": 0, 00:35:58.639 "medium_priority_weight": 0, 00:35:58.639 "high_priority_weight": 0, 00:35:58.639 "nvme_adminq_poll_period_us": 10000, 00:35:58.639 "nvme_ioq_poll_period_us": 0, 00:35:58.639 "io_queue_requests": 0, 00:35:58.639 "delay_cmd_submit": true, 00:35:58.639 "transport_retry_count": 4, 00:35:58.639 "bdev_retry_count": 3, 00:35:58.639 "transport_ack_timeout": 0, 00:35:58.639 "ctrlr_loss_timeout_sec": 0, 00:35:58.639 "reconnect_delay_sec": 0, 00:35:58.639 "fast_io_fail_timeout_sec": 0, 00:35:58.639 "disable_auto_failback": false, 00:35:58.639 "generate_uuids": false, 00:35:58.639 "transport_tos": 0, 00:35:58.639 "nvme_error_stat": false, 
00:35:58.639 "rdma_srq_size": 0, 00:35:58.639 "io_path_stat": false, 00:35:58.639 "allow_accel_sequence": false, 00:35:58.639 "rdma_max_cq_size": 0, 00:35:58.639 "rdma_cm_event_timeout_ms": 0, 00:35:58.639 "dhchap_digests": [ 00:35:58.639 "sha256", 00:35:58.639 "sha384", 00:35:58.639 "sha512" 00:35:58.639 ], 00:35:58.639 "dhchap_dhgroups": [ 00:35:58.639 "null", 00:35:58.639 "ffdhe2048", 00:35:58.639 "ffdhe3072", 00:35:58.639 "ffdhe4096", 00:35:58.639 "ffdhe6144", 00:35:58.639 "ffdhe8192" 00:35:58.639 ] 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "bdev_nvme_set_hotplug", 00:35:58.639 "params": { 00:35:58.639 "period_us": 100000, 00:35:58.639 "enable": false 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "bdev_malloc_create", 00:35:58.639 "params": { 00:35:58.639 "name": "malloc0", 00:35:58.639 "num_blocks": 8192, 00:35:58.639 "block_size": 4096, 00:35:58.639 "physical_block_size": 4096, 00:35:58.639 "uuid": "c353284c-24f2-4f73-9604-6e0f42970bcb", 00:35:58.639 "optimal_io_boundary": 0 00:35:58.639 } 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "method": "bdev_wait_for_examine" 00:35:58.639 } 00:35:58.639 ] 00:35:58.639 }, 00:35:58.639 { 00:35:58.639 "subsystem": "nbd", 00:35:58.639 "config": [] 00:35:58.639 }, 00:35:58.639 { 00:35:58.640 "subsystem": "scheduler", 00:35:58.640 "config": [ 00:35:58.640 { 00:35:58.640 "method": "framework_set_scheduler", 00:35:58.640 "params": { 00:35:58.640 "name": "static" 00:35:58.640 } 00:35:58.640 } 00:35:58.640 ] 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "subsystem": "nvmf", 00:35:58.640 "config": [ 00:35:58.640 { 00:35:58.640 "method": "nvmf_set_config", 00:35:58.640 "params": { 00:35:58.640 "discovery_filter": "match_any", 00:35:58.640 "admin_cmd_passthru": { 00:35:58.640 "identify_ctrlr": false 00:35:58.640 } 00:35:58.640 } 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "method": "nvmf_set_max_subsystems", 00:35:58.640 "params": { 00:35:58.640 "max_subsystems": 1024 00:35:58.640 } 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "method": "nvmf_set_crdt", 00:35:58.640 "params": { 00:35:58.640 "crdt1": 0, 00:35:58.640 "crdt2": 0, 00:35:58.640 "crdt3": 0 00:35:58.640 } 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "method": "nvmf_create_transport", 00:35:58.640 "params": { 00:35:58.640 "trtype": "TCP", 00:35:58.640 "max_queue_depth": 128, 00:35:58.640 "max_io_qpairs_per_ctrlr": 127, 00:35:58.640 "in_capsule_data_size": 4096, 00:35:58.640 "max_io_size": 131072, 00:35:58.640 "io_unit_size": 131072, 00:35:58.640 "max_aq_depth": 128, 00:35:58.640 "num_shared_buffers": 511, 00:35:58.640 "buf_cache_size": 4294967295, 00:35:58.640 "dif_insert_or_strip": false, 00:35:58.640 "zcopy": false, 00:35:58.640 "c2h_success": false, 00:35:58.640 "sock_priority": 0, 00:35:58.640 "abort_timeout_sec": 1, 00:35:58.640 "ack_timeout": 0, 00:35:58.640 "data_wr_pool_size": 0 00:35:58.640 } 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "method": "nvmf_create_subsystem", 00:35:58.640 "params": { 00:35:58.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.640 "allow_any_host": false, 00:35:58.640 "serial_number": "00000000000000000000", 00:35:58.640 "model_number": "SPDK bdev Controller", 00:35:58.640 "max_namespaces": 32, 00:35:58.640 "min_cntlid": 1, 00:35:58.640 "max_cntlid": 65519, 00:35:58.640 "ana_reporting": false 00:35:58.640 } 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "method": "nvmf_subsystem_add_host", 00:35:58.640 "params": { 00:35:58.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.640 "host": "nqn.2016-06.io.spdk:host1", 
00:35:58.640 "psk": "key0" 00:35:58.640 } 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "method": "nvmf_subsystem_add_ns", 00:35:58.640 "params": { 00:35:58.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.640 "namespace": { 00:35:58.640 "nsid": 1, 00:35:58.640 "bdev_name": "malloc0", 00:35:58.640 "nguid": "C353284C24F24F7396046E0F42970BCB", 00:35:58.640 "uuid": "c353284c-24f2-4f73-9604-6e0f42970bcb", 00:35:58.640 "no_auto_visible": false 00:35:58.640 } 00:35:58.640 } 00:35:58.640 }, 00:35:58.640 { 00:35:58.640 "method": "nvmf_subsystem_add_listener", 00:35:58.640 "params": { 00:35:58.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.640 "listen_address": { 00:35:58.640 "trtype": "TCP", 00:35:58.640 "adrfam": "IPv4", 00:35:58.640 "traddr": "10.0.0.2", 00:35:58.640 "trsvcid": "4420" 00:35:58.640 }, 00:35:58.640 "secure_channel": true 00:35:58.640 } 00:35:58.640 } 00:35:58.640 ] 00:35:58.640 } 00:35:58.640 ] 00:35:58.640 }' 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2375398 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2375398 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2375398 ']' 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:58.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:58.640 11:44:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:58.901 [2024-06-10 11:44:27.629290] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:35:58.901 [2024-06-10 11:44:27.629344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:58.901 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.901 [2024-06-10 11:44:27.692424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.901 [2024-06-10 11:44:27.760135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:58.901 [2024-06-10 11:44:27.760172] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:58.901 [2024-06-10 11:44:27.760179] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.901 [2024-06-10 11:44:27.760186] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.901 [2024-06-10 11:44:27.760192] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:58.901 [2024-06-10 11:44:27.760243] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.162 [2024-06-10 11:44:27.957636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.162 [2024-06-10 11:44:27.989639] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:59.162 [2024-06-10 11:44:28.011844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2375724 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2375724 /var/tmp/bdevperf.sock 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2375724 ']' 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:59.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:59.734 11:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:35:59.734 "subsystems": [ 00:35:59.734 { 00:35:59.734 "subsystem": "keyring", 00:35:59.734 "config": [ 00:35:59.734 { 00:35:59.734 "method": "keyring_file_add_key", 00:35:59.734 "params": { 00:35:59.734 "name": "key0", 00:35:59.734 "path": "/tmp/tmp.4TOX5ikiIP" 00:35:59.735 } 00:35:59.735 } 00:35:59.735 ] 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "subsystem": "iobuf", 00:35:59.735 "config": [ 00:35:59.735 { 00:35:59.735 "method": "iobuf_set_options", 00:35:59.735 "params": { 00:35:59.735 "small_pool_count": 8192, 00:35:59.735 "large_pool_count": 1024, 00:35:59.735 "small_bufsize": 8192, 00:35:59.735 "large_bufsize": 135168 00:35:59.735 } 00:35:59.735 } 00:35:59.735 ] 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "subsystem": "sock", 00:35:59.735 "config": [ 00:35:59.735 { 00:35:59.735 "method": "sock_set_default_impl", 00:35:59.735 "params": { 00:35:59.735 "impl_name": "posix" 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "sock_impl_set_options", 00:35:59.735 "params": { 00:35:59.735 "impl_name": "ssl", 00:35:59.735 "recv_buf_size": 4096, 00:35:59.735 "send_buf_size": 4096, 00:35:59.735 "enable_recv_pipe": true, 00:35:59.735 "enable_quickack": false, 00:35:59.735 "enable_placement_id": 0, 00:35:59.735 "enable_zerocopy_send_server": true, 00:35:59.735 "enable_zerocopy_send_client": false, 00:35:59.735 "zerocopy_threshold": 0, 00:35:59.735 "tls_version": 0, 00:35:59.735 "enable_ktls": false 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "sock_impl_set_options", 00:35:59.735 "params": { 00:35:59.735 "impl_name": "posix", 00:35:59.735 "recv_buf_size": 2097152, 00:35:59.735 "send_buf_size": 2097152, 00:35:59.735 "enable_recv_pipe": true, 00:35:59.735 "enable_quickack": false, 00:35:59.735 "enable_placement_id": 0, 00:35:59.735 "enable_zerocopy_send_server": true, 00:35:59.735 "enable_zerocopy_send_client": false, 00:35:59.735 "zerocopy_threshold": 0, 00:35:59.735 "tls_version": 0, 00:35:59.735 "enable_ktls": false 00:35:59.735 } 00:35:59.735 } 00:35:59.735 ] 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "subsystem": "vmd", 00:35:59.735 "config": [] 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "subsystem": "accel", 00:35:59.735 "config": [ 00:35:59.735 { 00:35:59.735 "method": "accel_set_options", 00:35:59.735 "params": { 00:35:59.735 "small_cache_size": 128, 00:35:59.735 "large_cache_size": 16, 00:35:59.735 "task_count": 2048, 00:35:59.735 "sequence_count": 2048, 00:35:59.735 "buf_count": 2048 00:35:59.735 } 00:35:59.735 } 00:35:59.735 ] 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "subsystem": "bdev", 00:35:59.735 "config": [ 00:35:59.735 { 00:35:59.735 "method": "bdev_set_options", 00:35:59.735 "params": { 00:35:59.735 "bdev_io_pool_size": 65535, 00:35:59.735 "bdev_io_cache_size": 256, 00:35:59.735 "bdev_auto_examine": true, 00:35:59.735 "iobuf_small_cache_size": 128, 00:35:59.735 "iobuf_large_cache_size": 16 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "bdev_raid_set_options", 00:35:59.735 "params": { 00:35:59.735 "process_window_size_kb": 1024 00:35:59.735 } 
00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "bdev_iscsi_set_options", 00:35:59.735 "params": { 00:35:59.735 "timeout_sec": 30 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "bdev_nvme_set_options", 00:35:59.735 "params": { 00:35:59.735 "action_on_timeout": "none", 00:35:59.735 "timeout_us": 0, 00:35:59.735 "timeout_admin_us": 0, 00:35:59.735 "keep_alive_timeout_ms": 10000, 00:35:59.735 "arbitration_burst": 0, 00:35:59.735 "low_priority_weight": 0, 00:35:59.735 "medium_priority_weight": 0, 00:35:59.735 "high_priority_weight": 0, 00:35:59.735 "nvme_adminq_poll_period_us": 10000, 00:35:59.735 "nvme_ioq_poll_period_us": 0, 00:35:59.735 "io_queue_requests": 512, 00:35:59.735 "delay_cmd_submit": true, 00:35:59.735 "transport_retry_count": 4, 00:35:59.735 "bdev_retry_count": 3, 00:35:59.735 "transport_ack_timeout": 0, 00:35:59.735 "ctrlr_loss_timeout_sec": 0, 00:35:59.735 "reconnect_delay_sec": 0, 00:35:59.735 "fast_io_fail_timeout_sec": 0, 00:35:59.735 "disable_auto_failback": false, 00:35:59.735 "generate_uuids": false, 00:35:59.735 "transport_tos": 0, 00:35:59.735 "nvme_error_stat": false, 00:35:59.735 "rdma_srq_size": 0, 00:35:59.735 "io_path_stat": false, 00:35:59.735 "allow_accel_sequence": false, 00:35:59.735 "rdma_max_cq_size": 0, 00:35:59.735 "rdma_cm_event_timeout_ms": 0, 00:35:59.735 "dhchap_digests": [ 00:35:59.735 "sha256", 00:35:59.735 "sha384", 00:35:59.735 "sha512" 00:35:59.735 ], 00:35:59.735 "dhchap_dhgroups": [ 00:35:59.735 "null", 00:35:59.735 "ffdhe2048", 00:35:59.735 "ffdhe3072", 00:35:59.735 "ffdhe4096", 00:35:59.735 "ffdhe6144", 00:35:59.735 "ffdhe8192" 00:35:59.735 ] 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "bdev_nvme_attach_controller", 00:35:59.735 "params": { 00:35:59.735 "name": "nvme0", 00:35:59.735 "trtype": "TCP", 00:35:59.735 "adrfam": "IPv4", 00:35:59.735 "traddr": "10.0.0.2", 00:35:59.735 "trsvcid": "4420", 00:35:59.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:59.735 "prchk_reftag": false, 00:35:59.735 "prchk_guard": false, 00:35:59.735 "ctrlr_loss_timeout_sec": 0, 00:35:59.735 "reconnect_delay_sec": 0, 00:35:59.735 "fast_io_fail_timeout_sec": 0, 00:35:59.735 "psk": "key0", 00:35:59.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:59.735 "hdgst": false, 00:35:59.735 "ddgst": false 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "bdev_nvme_set_hotplug", 00:35:59.735 "params": { 00:35:59.735 "period_us": 100000, 00:35:59.735 "enable": false 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "bdev_enable_histogram", 00:35:59.735 "params": { 00:35:59.735 "name": "nvme0n1", 00:35:59.735 "enable": true 00:35:59.735 } 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "method": "bdev_wait_for_examine" 00:35:59.735 } 00:35:59.735 ] 00:35:59.735 }, 00:35:59.735 { 00:35:59.735 "subsystem": "nbd", 00:35:59.735 "config": [] 00:35:59.735 } 00:35:59.735 ] 00:35:59.735 }' 00:35:59.735 [2024-06-10 11:44:28.574486] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:35:59.735 [2024-06-10 11:44:28.574538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375724 ] 00:35:59.736 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.736 [2024-06-10 11:44:28.633192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.736 [2024-06-10 11:44:28.697203] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.996 [2024-06-10 11:44:28.835786] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:00.568 11:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:00.568 11:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:36:00.568 11:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:00.568 11:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:36:00.828 11:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.828 11:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:00.828 Running I/O for 1 seconds... 00:36:02.214 00:36:02.214 Latency(us) 00:36:02.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.214 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:02.214 Verification LBA range: start 0x0 length 0x2000 00:36:02.214 nvme0n1 : 1.03 2880.64 11.25 0.00 0.00 43829.84 6225.92 55487.15 00:36:02.214 =================================================================================================================== 00:36:02.214 Total : 2880.64 11.25 0.00 0.00 43829.84 6225.92 55487.15 00:36:02.214 0 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:02.214 nvmf_trace.0 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2375724 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2375724 ']' 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2375724 
00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2375724 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2375724' 00:36:02.214 killing process with pid 2375724 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2375724 00:36:02.214 Received shutdown signal, test time was about 1.000000 seconds 00:36:02.214 00:36:02.214 Latency(us) 00:36:02.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.214 =================================================================================================================== 00:36:02.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:02.214 11:44:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2375724 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:02.214 rmmod nvme_tcp 00:36:02.214 rmmod nvme_fabrics 00:36:02.214 rmmod nvme_keyring 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2375398 ']' 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2375398 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2375398 ']' 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2375398 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:02.214 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2375398 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2375398' 00:36:02.474 killing process with pid 2375398 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2375398 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2375398 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:02.474 11:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.020 11:44:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:05.020 11:44:33 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PEGsycjEm7 /tmp/tmp.SAmtMKKZcU /tmp/tmp.4TOX5ikiIP 00:36:05.020 00:36:05.020 real 1m19.571s 00:36:05.020 user 2m2.133s 00:36:05.020 sys 0m26.704s 00:36:05.020 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:05.020 11:44:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:36:05.020 ************************************ 00:36:05.020 END TEST nvmf_tls 00:36:05.020 ************************************ 00:36:05.020 11:44:33 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:36:05.020 11:44:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:05.020 11:44:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:05.020 11:44:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:05.020 ************************************ 00:36:05.020 START TEST nvmf_fips 00:36:05.020 ************************************ 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:36:05.020 * Looking for test storage... 
00:36:05.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.020 11:44:33 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:36:05.020 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:36:05.021 Error setting digest 00:36:05.021 0002E8FF527F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:36:05.021 0002E8FF527F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:36:05.021 11:44:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.165 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:13.166 
11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:13.166 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:13.166 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:13.166 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:13.166 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:13.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:36:13.166 00:36:13.166 --- 10.0.0.2 ping statistics --- 00:36:13.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.166 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:36:13.166 00:36:13.166 --- 10.0.0.1 ping statistics --- 00:36:13.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.166 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2380424 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2380424 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2380424 ']' 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:13.166 11:44:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:13.166 [2024-06-10 11:44:41.036690] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:36:13.166 [2024-06-10 11:44:41.036761] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.166 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.166 [2024-06-10 11:44:41.105917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.166 [2024-06-10 11:44:41.181342] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.166 [2024-06-10 11:44:41.181379] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
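For reference, the namespace topology that nvmftestinit assembled above can be reproduced by hand with roughly the sequence below; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken straight from the trace, but this is a simplified sketch rather than the exact nvmf/common.sh code.

# Sketch (not the verbatim nvmf/common.sh implementation): build the
# two-port test topology used above. cvl_0_0 becomes the target-side port
# inside a dedicated namespace; cvl_0_1 stays in the default namespace as
# the initiator-side port.
TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                               # initiator -> target sanity check
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                    # target -> initiator sanity check

Every nvmf_tgt invocation later in this run is then wrapped in "ip netns exec cvl_0_0_ns_spdk" so the target only ever sees the namespaced port.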
00:36:13.166 [2024-06-10 11:44:41.181387] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.166 [2024-06-10 11:44:41.181393] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.166 [2024-06-10 11:44:41.181399] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.166 [2024-06-10 11:44:41.181416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:36:13.166 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:36:13.167 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:36:13.167 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:36:13.167 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:36:13.167 11:44:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:13.167 [2024-06-10 11:44:42.108550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.167 [2024-06-10 11:44:42.124552] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:13.167 [2024-06-10 11:44:42.124755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:13.427 [2024-06-10 11:44:42.151383] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:13.427 malloc0 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2380733 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2380733 /var/tmp/bdevperf.sock 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2380733 ']' 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:13.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:13.427 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:13.427 [2024-06-10 11:44:42.246470] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:36:13.427 [2024-06-10 11:44:42.246522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380733 ] 00:36:13.427 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.427 [2024-06-10 11:44:42.296129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.427 [2024-06-10 11:44:42.348429] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:13.687 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:13.687 11:44:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:36:13.687 11:44:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:36:13.687 [2024-06-10 11:44:42.608073] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:13.687 [2024-06-10 11:44:42.608130] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:36:13.947 TLSTESTn1 00:36:13.947 11:44:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:13.947 Running I/O for 10 seconds... 
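The TLS path exercised here boils down to three steps: persist the interchange PSK with restrictive permissions, point bdevperf at the target with that key, and drive I/O through the resulting TLSTESTn1 bdev. A condensed sketch using only the commands visible in the trace (SPDK_DIR is just a convenience variable for the workspace path):

# Condensed sketch of the fips.sh TLS flow shown above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
KEY=$SPDK_DIR/test/nvmf/fips/key.txt

# 1) Store the NVMe/TCP interchange PSK with owner-only permissions.
#    (The target side was configured with the same key file via
#    setup_nvmf_tgt_conf earlier in the trace.)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"

# 2) Attach a controller over TCP with the PSK; bdevperf exposes it as TLSTESTn1.
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

# 3) Run the verify workload configured on the bdevperf command line
#    (-q 128 -o 4096 -w verify -t 10) against that namespace.
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests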
00:36:24.025 00:36:24.025 Latency(us) 00:36:24.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.025 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:24.025 Verification LBA range: start 0x0 length 0x2000 00:36:24.025 TLSTESTn1 : 10.08 2476.90 9.68 0.00 0.00 51496.05 5898.24 93061.12 00:36:24.025 =================================================================================================================== 00:36:24.025 Total : 2476.90 9.68 0.00 0.00 51496.05 5898.24 93061.12 00:36:24.025 0 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:36:24.025 11:44:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:24.025 nvmf_trace.0 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2380733 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2380733 ']' 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 2380733 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2380733 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2380733' 00:36:24.285 killing process with pid 2380733 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2380733 00:36:24.285 Received shutdown signal, test time was about 10.000000 seconds 00:36:24.285 00:36:24.285 Latency(us) 00:36:24.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.285 =================================================================================================================== 00:36:24.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:24.285 [2024-06-10 11:44:53.085774] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2380733 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:36:24.285 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:24.286 rmmod nvme_tcp 00:36:24.286 rmmod nvme_fabrics 00:36:24.286 rmmod nvme_keyring 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2380424 ']' 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2380424 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2380424 ']' 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 2380424 00:36:24.286 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2380424 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2380424' 00:36:24.546 killing process with pid 2380424 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2380424 00:36:24.546 [2024-06-10 11:44:53.310108] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2380424 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:24.546 11:44:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:27.091 11:44:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:27.091 11:44:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:36:27.091 00:36:27.091 real 0m22.014s 00:36:27.091 user 0m23.076s 00:36:27.091 sys 0m9.241s 00:36:27.091 11:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:27.091 11:44:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:27.091 ************************************ 00:36:27.091 END TEST nvmf_fips 
00:36:27.091 ************************************ 00:36:27.091 11:44:55 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:36:27.091 11:44:55 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:36:27.091 11:44:55 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:36:27.091 11:44:55 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:36:27.091 11:44:55 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:36:27.091 11:44:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:33.678 11:45:02 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:33.679 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:33.679 11:45:02 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:33.679 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:33.679 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:33.679 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:36:33.679 11:45:02 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:36:33.679 11:45:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:33.679 11:45:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:33.679 11:45:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
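The nvmf_fips test that just finished gates itself on two OpenSSL checks before ever touching the target: the provider list must contain both a base and a FIPS provider, and a FIPS-prohibited digest such as MD5 must fail. A minimal standalone version of that gate (simplified from test/nvmf/fips/fips.sh; the real script's error handling and config generation are more involved):

# Minimal sketch of the FIPS gate: require a loaded FIPS provider and
# confirm that MD5 is rejected by the default library context.
set -e

providers=$(openssl list -providers | grep name)
grep -qi base <<< "$providers"   # e.g. "name: openssl base provider"
grep -qi fips <<< "$providers"   # e.g. "name: red hat enterprise linux 9 - openssl fips provider"

# MD5 must fail under FIPS; the test only proceeds when this command errors out,
# exactly like the "Error setting digest" seen in the trace above.
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly succeeded - not running in FIPS mode" >&2
    exit 1
fi
echo "FIPS providers present and MD5 disabled"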
00:36:33.679 ************************************ 00:36:33.679 START TEST nvmf_perf_adq 00:36:33.679 ************************************ 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:36:33.679 * Looking for test storage... 00:36:33.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.679 11:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:36:33.680 11:45:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:40.273 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:40.273 Found 0000:4b:00.1 (0x8086 - 0x159b) 
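The device-discovery pass repeated at the top of each test (gather_supported_nvmf_pci_devs) is driven purely by sysfs: E810 ports are picked out by PCI vendor/device ID and then mapped to their kernel netdev names under /sys/bus/pci/devices/<bdf>/net. A trimmed-down sketch of the same idea, hard-coding the two device IDs seen in this run:

# Trimmed-down sketch of the e810 discovery above: find Intel (0x8086)
# devices with the 0x1592/0x159b E810 device IDs and list the net devices
# each one exposes (cvl_0_0 / cvl_0_1 on this machine).
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")
    device=$(cat "$dev/device")
    [[ $vendor == 0x8086 ]] || continue
    [[ $device == 0x1592 || $device == 0x159b ]] || continue
    for netdir in "$dev"/net/*; do
        [[ -e $netdir ]] || continue
        echo "Found net devices under ${dev##*/}: ${netdir##*/}"
    done
done

The full common.sh additionally skips ports that are not up (the "[[ up == up ]]" checks in the trace) and has separate branches for RDMA transports that a TCP run like this one never takes.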
00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:40.273 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:40.273 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:36:40.273 11:45:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:36:42.188 11:45:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:36:44.102 11:45:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:36:49.394 11:45:17 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:36:49.394 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:49.394 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:49.394 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:49.394 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:49.394 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:49.394 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:49.395 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:49.395 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:49.395 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:49.395 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.395 11:45:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.395 11:45:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:49.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:49.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:36:49.395 00:36:49.395 --- 10.0.0.2 ping statistics --- 00:36:49.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.395 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:49.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:36:49.395 00:36:49.395 --- 10.0.0.1 ping statistics --- 00:36:49.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.395 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2392903 00:36:49.395 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2392903 00:36:49.396 11:45:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:49.396 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 2392903 ']' 00:36:49.396 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:49.396 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:49.396 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:49.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:49.396 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:49.396 11:45:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:49.396 [2024-06-10 11:45:18.267542] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:36:49.396 [2024-06-10 11:45:18.267589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:49.396 EAL: No free 2048 kB hugepages reported on node 1 00:36:49.396 [2024-06-10 11:45:18.333126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:49.656 [2024-06-10 11:45:18.399638] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:49.656 [2024-06-10 11:45:18.399681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:49.656 [2024-06-10 11:45:18.399689] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:49.656 [2024-06-10 11:45:18.399696] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:49.656 [2024-06-10 11:45:18.399702] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:49.656 [2024-06-10 11:45:18.402689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:49.656 [2024-06-10 11:45:18.402947] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:49.656 [2024-06-10 11:45:18.403094] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.656 [2024-06-10 11:45:18.403094] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.228 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.489 [2024-06-10 11:45:19.260647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.489 Malloc1 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:50.489 [2024-06-10 11:45:19.317447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2393253 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:36:50.489 11:45:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:50.489 EAL: No free 2048 kB hugepages reported on node 1 00:36:52.405 11:45:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:36:52.405 11:45:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:52.405 11:45:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:36:52.405 11:45:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:52.405 11:45:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:36:52.405 "tick_rate": 2400000000, 
00:36:52.405 "poll_groups": [ 00:36:52.405 { 00:36:52.405 "name": "nvmf_tgt_poll_group_000", 00:36:52.405 "admin_qpairs": 1, 00:36:52.405 "io_qpairs": 1, 00:36:52.405 "current_admin_qpairs": 1, 00:36:52.405 "current_io_qpairs": 1, 00:36:52.405 "pending_bdev_io": 0, 00:36:52.405 "completed_nvme_io": 19148, 00:36:52.405 "transports": [ 00:36:52.405 { 00:36:52.405 "trtype": "TCP" 00:36:52.405 } 00:36:52.405 ] 00:36:52.405 }, 00:36:52.405 { 00:36:52.405 "name": "nvmf_tgt_poll_group_001", 00:36:52.405 "admin_qpairs": 0, 00:36:52.405 "io_qpairs": 1, 00:36:52.405 "current_admin_qpairs": 0, 00:36:52.405 "current_io_qpairs": 1, 00:36:52.405 "pending_bdev_io": 0, 00:36:52.405 "completed_nvme_io": 28640, 00:36:52.405 "transports": [ 00:36:52.405 { 00:36:52.405 "trtype": "TCP" 00:36:52.405 } 00:36:52.405 ] 00:36:52.405 }, 00:36:52.405 { 00:36:52.405 "name": "nvmf_tgt_poll_group_002", 00:36:52.405 "admin_qpairs": 0, 00:36:52.405 "io_qpairs": 1, 00:36:52.405 "current_admin_qpairs": 0, 00:36:52.405 "current_io_qpairs": 1, 00:36:52.405 "pending_bdev_io": 0, 00:36:52.405 "completed_nvme_io": 20806, 00:36:52.405 "transports": [ 00:36:52.405 { 00:36:52.405 "trtype": "TCP" 00:36:52.405 } 00:36:52.405 ] 00:36:52.405 }, 00:36:52.405 { 00:36:52.405 "name": "nvmf_tgt_poll_group_003", 00:36:52.405 "admin_qpairs": 0, 00:36:52.405 "io_qpairs": 1, 00:36:52.405 "current_admin_qpairs": 0, 00:36:52.405 "current_io_qpairs": 1, 00:36:52.405 "pending_bdev_io": 0, 00:36:52.405 "completed_nvme_io": 21284, 00:36:52.405 "transports": [ 00:36:52.405 { 00:36:52.405 "trtype": "TCP" 00:36:52.405 } 00:36:52.405 ] 00:36:52.405 } 00:36:52.405 ] 00:36:52.405 }' 00:36:52.405 11:45:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:36:52.405 11:45:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:36:52.666 11:45:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:36:52.666 11:45:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:36:52.666 11:45:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2393253 00:37:00.809 Initializing NVMe Controllers 00:37:00.809 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:00.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:37:00.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:37:00.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:37:00.809 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:37:00.809 Initialization complete. Launching workers. 
00:37:00.809 ======================================================== 00:37:00.809 Latency(us) 00:37:00.809 Device Information : IOPS MiB/s Average min max 00:37:00.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11102.00 43.37 5783.68 1392.70 46338.28 00:37:00.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15230.20 59.49 4201.93 885.97 8541.73 00:37:00.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10269.20 40.11 6231.94 1479.21 11763.52 00:37:00.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11359.50 44.37 5633.47 1681.74 11540.17 00:37:00.809 ======================================================== 00:37:00.809 Total : 47960.90 187.35 5341.79 885.97 46338.28 00:37:00.809 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:00.809 rmmod nvme_tcp 00:37:00.809 rmmod nvme_fabrics 00:37:00.809 rmmod nvme_keyring 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2392903 ']' 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2392903 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2392903 ']' 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2392903 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2392903 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2392903' 00:37:00.809 killing process with pid 2392903 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2392903 00:37:00.809 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2392903 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:01.070 11:45:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.984 11:45:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:02.984 11:45:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:37:02.984 11:45:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:37:04.366 11:45:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:37:06.278 11:45:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.621 11:45:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:11.621 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:11.622 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:11.622 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:11.622 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:11.622 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.622 
11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:11.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:11.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:37:11.622 00:37:11.622 --- 10.0.0.2 ping statistics --- 00:37:11.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.622 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:11.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:37:11.622 00:37:11.622 --- 10.0.0.1 ping statistics --- 00:37:11.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.622 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:37:11.622 net.core.busy_poll = 1 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:37:11.622 net.core.busy_read = 1 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:37:11.622 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2397724 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2397724 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 2397724 ']' 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:11.891 11:45:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:11.891 [2024-06-10 11:45:40.736980] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:37:11.891 [2024-06-10 11:45:40.737049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.891 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.891 [2024-06-10 11:45:40.807415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:12.151 [2024-06-10 11:45:40.882744] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.151 [2024-06-10 11:45:40.882782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.151 [2024-06-10 11:45:40.882790] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.151 [2024-06-10 11:45:40.882797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.151 [2024-06-10 11:45:40.882803] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:12.151 [2024-06-10 11:45:40.882848] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.151 [2024-06-10 11:45:40.882986] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:12.151 [2024-06-10 11:45:40.883144] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.151 [2024-06-10 11:45:40.883145] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.721 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.981 [2024-06-10 11:45:41.773950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.981 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.982 Malloc1 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.982 11:45:41 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:12.982 [2024-06-10 11:45:41.833241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2398077 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:37:12.982 11:45:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:12.982 EAL: No free 2048 kB hugepages reported on node 1 00:37:14.893 11:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:37:14.893 11:45:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:14.893 11:45:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:14.893 11:45:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:15.155 11:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:37:15.155 "tick_rate": 2400000000, 00:37:15.155 "poll_groups": [ 00:37:15.155 { 00:37:15.155 "name": "nvmf_tgt_poll_group_000", 00:37:15.155 "admin_qpairs": 1, 00:37:15.155 "io_qpairs": 1, 00:37:15.155 "current_admin_qpairs": 1, 00:37:15.155 "current_io_qpairs": 1, 00:37:15.155 "pending_bdev_io": 0, 00:37:15.155 "completed_nvme_io": 26666, 00:37:15.155 "transports": [ 00:37:15.155 { 00:37:15.155 "trtype": "TCP" 00:37:15.155 } 00:37:15.155 ] 00:37:15.155 }, 00:37:15.155 { 00:37:15.155 "name": "nvmf_tgt_poll_group_001", 00:37:15.155 "admin_qpairs": 0, 00:37:15.155 "io_qpairs": 3, 00:37:15.155 "current_admin_qpairs": 0, 00:37:15.155 "current_io_qpairs": 3, 00:37:15.155 "pending_bdev_io": 0, 00:37:15.155 "completed_nvme_io": 43216, 00:37:15.155 "transports": [ 00:37:15.155 { 00:37:15.155 "trtype": "TCP" 00:37:15.155 } 00:37:15.155 ] 00:37:15.155 }, 00:37:15.155 { 00:37:15.155 "name": "nvmf_tgt_poll_group_002", 00:37:15.155 "admin_qpairs": 0, 00:37:15.155 "io_qpairs": 0, 00:37:15.155 "current_admin_qpairs": 0, 00:37:15.155 "current_io_qpairs": 0, 00:37:15.155 "pending_bdev_io": 0, 00:37:15.155 "completed_nvme_io": 0, 
00:37:15.155 "transports": [ 00:37:15.155 { 00:37:15.155 "trtype": "TCP" 00:37:15.155 } 00:37:15.155 ] 00:37:15.155 }, 00:37:15.155 { 00:37:15.155 "name": "nvmf_tgt_poll_group_003", 00:37:15.155 "admin_qpairs": 0, 00:37:15.155 "io_qpairs": 0, 00:37:15.155 "current_admin_qpairs": 0, 00:37:15.155 "current_io_qpairs": 0, 00:37:15.155 "pending_bdev_io": 0, 00:37:15.155 "completed_nvme_io": 0, 00:37:15.155 "transports": [ 00:37:15.155 { 00:37:15.155 "trtype": "TCP" 00:37:15.155 } 00:37:15.155 ] 00:37:15.155 } 00:37:15.155 ] 00:37:15.155 }' 00:37:15.155 11:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:37:15.155 11:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:37:15.155 11:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:37:15.155 11:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:37:15.155 11:45:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2398077 00:37:23.296 Initializing NVMe Controllers 00:37:23.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:23.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:37:23.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:37:23.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:37:23.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:37:23.296 Initialization complete. Launching workers. 00:37:23.296 ======================================================== 00:37:23.296 Latency(us) 00:37:23.296 Device Information : IOPS MiB/s Average min max 00:37:23.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14648.10 57.22 4380.08 1740.68 46327.88 00:37:23.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6303.70 24.62 10184.04 1717.33 54776.77 00:37:23.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8560.70 33.44 7476.14 1353.82 52248.75 00:37:23.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8033.00 31.38 7967.09 1113.06 54501.36 00:37:23.296 ======================================================== 00:37:23.296 Total : 37545.50 146.66 6827.92 1113.06 54776.77 00:37:23.296 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:23.296 rmmod nvme_tcp 00:37:23.296 rmmod nvme_fabrics 00:37:23.296 rmmod nvme_keyring 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2397724 ']' 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2397724 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2397724 ']' 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2397724 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2397724 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2397724' 00:37:23.296 killing process with pid 2397724 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2397724 00:37:23.296 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2397724 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:23.558 11:45:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.864 11:45:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:26.864 11:45:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:26.864 00:37:26.864 real 0m53.088s 00:37:26.864 user 2m47.455s 00:37:26.864 sys 0m11.863s 00:37:26.864 11:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:26.864 11:45:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:37:26.864 ************************************ 00:37:26.864 END TEST nvmf_perf_adq 00:37:26.864 ************************************ 00:37:26.864 11:45:55 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:37:26.864 11:45:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:26.864 11:45:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:26.864 11:45:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.864 ************************************ 00:37:26.864 START TEST nvmf_shutdown 00:37:26.864 ************************************ 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:37:26.864 * Looking for test storage... 
00:37:26.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:26.864 ************************************ 00:37:26.864 START TEST nvmf_shutdown_tc1 00:37:26.864 ************************************ 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:37:26.864 11:45:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:37:26.864 11:45:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:35.019 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:35.019 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:35.019 11:46:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:35.019 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:35.019 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:35.019 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:35.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:35.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:37:35.020 00:37:35.020 --- 10.0.0.2 ping statistics --- 00:37:35.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.020 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:35.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:35.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:37:35.020 00:37:35.020 --- 10.0.0.1 ping statistics --- 00:37:35.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.020 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:35.020 11:46:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2404530 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2404530 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2404530 ']' 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.020 [2024-06-10 11:46:03.084302] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:37:35.020 [2024-06-10 11:46:03.084363] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.020 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.020 [2024-06-10 11:46:03.154834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:35.020 [2024-06-10 11:46:03.229488] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.020 [2024-06-10 11:46:03.229529] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.020 [2024-06-10 11:46:03.229536] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.020 [2024-06-10 11:46:03.229543] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.020 [2024-06-10 11:46:03.229549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.020 [2024-06-10 11:46:03.229698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:35.020 [2024-06-10 11:46:03.229888] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:35.020 [2024-06-10 11:46:03.230046] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.020 [2024-06-10 11:46:03.230046] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.020 [2024-06-10 11:46:03.964424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.020 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.281 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.281 Malloc1 00:37:35.281 [2024-06-10 11:46:04.067908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.281 Malloc2 00:37:35.281 Malloc3 00:37:35.281 Malloc4 00:37:35.281 Malloc5 00:37:35.281 Malloc6 00:37:35.542 Malloc7 00:37:35.542 Malloc8 00:37:35.542 Malloc9 00:37:35.542 Malloc10 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2404912 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2404912 /var/tmp/bdevperf.sock 00:37:35.542 11:46:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2404912 ']' 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:35.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.542 { 00:37:35.542 "params": { 00:37:35.542 "name": "Nvme$subsystem", 00:37:35.542 "trtype": "$TEST_TRANSPORT", 00:37:35.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.542 "adrfam": "ipv4", 00:37:35.542 "trsvcid": "$NVMF_PORT", 00:37:35.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.542 "hdgst": ${hdgst:-false}, 00:37:35.542 "ddgst": ${ddgst:-false} 00:37:35.542 }, 00:37:35.542 "method": "bdev_nvme_attach_controller" 00:37:35.542 } 00:37:35.542 EOF 00:37:35.542 )") 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.542 { 00:37:35.542 "params": { 00:37:35.542 "name": "Nvme$subsystem", 00:37:35.542 "trtype": "$TEST_TRANSPORT", 00:37:35.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.542 "adrfam": "ipv4", 00:37:35.542 "trsvcid": "$NVMF_PORT", 00:37:35.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.542 "hdgst": ${hdgst:-false}, 00:37:35.542 "ddgst": ${ddgst:-false} 00:37:35.542 }, 00:37:35.542 "method": "bdev_nvme_attach_controller" 00:37:35.542 } 00:37:35.542 EOF 00:37:35.542 )") 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.542 { 00:37:35.542 "params": { 00:37:35.542 "name": "Nvme$subsystem", 00:37:35.542 "trtype": 
"$TEST_TRANSPORT", 00:37:35.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.542 "adrfam": "ipv4", 00:37:35.542 "trsvcid": "$NVMF_PORT", 00:37:35.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.542 "hdgst": ${hdgst:-false}, 00:37:35.542 "ddgst": ${ddgst:-false} 00:37:35.542 }, 00:37:35.542 "method": "bdev_nvme_attach_controller" 00:37:35.542 } 00:37:35.542 EOF 00:37:35.542 )") 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.542 { 00:37:35.542 "params": { 00:37:35.542 "name": "Nvme$subsystem", 00:37:35.542 "trtype": "$TEST_TRANSPORT", 00:37:35.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.542 "adrfam": "ipv4", 00:37:35.542 "trsvcid": "$NVMF_PORT", 00:37:35.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.542 "hdgst": ${hdgst:-false}, 00:37:35.542 "ddgst": ${ddgst:-false} 00:37:35.542 }, 00:37:35.542 "method": "bdev_nvme_attach_controller" 00:37:35.542 } 00:37:35.542 EOF 00:37:35.542 )") 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.542 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.542 { 00:37:35.542 "params": { 00:37:35.543 "name": "Nvme$subsystem", 00:37:35.543 "trtype": "$TEST_TRANSPORT", 00:37:35.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.543 "adrfam": "ipv4", 00:37:35.543 "trsvcid": "$NVMF_PORT", 00:37:35.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.543 "hdgst": ${hdgst:-false}, 00:37:35.543 "ddgst": ${ddgst:-false} 00:37:35.543 }, 00:37:35.543 "method": "bdev_nvme_attach_controller" 00:37:35.543 } 00:37:35.543 EOF 00:37:35.543 )") 00:37:35.543 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.543 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.543 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.543 { 00:37:35.543 "params": { 00:37:35.543 "name": "Nvme$subsystem", 00:37:35.543 "trtype": "$TEST_TRANSPORT", 00:37:35.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.543 "adrfam": "ipv4", 00:37:35.543 "trsvcid": "$NVMF_PORT", 00:37:35.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.543 "hdgst": ${hdgst:-false}, 00:37:35.543 "ddgst": ${ddgst:-false} 00:37:35.543 }, 00:37:35.543 "method": "bdev_nvme_attach_controller" 00:37:35.543 } 00:37:35.543 EOF 00:37:35.543 )") 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.804 [2024-06-10 11:46:04.515821] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:37:35.804 [2024-06-10 11:46:04.515872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.804 { 00:37:35.804 "params": { 00:37:35.804 "name": "Nvme$subsystem", 00:37:35.804 "trtype": "$TEST_TRANSPORT", 00:37:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.804 "adrfam": "ipv4", 00:37:35.804 "trsvcid": "$NVMF_PORT", 00:37:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.804 "hdgst": ${hdgst:-false}, 00:37:35.804 "ddgst": ${ddgst:-false} 00:37:35.804 }, 00:37:35.804 "method": "bdev_nvme_attach_controller" 00:37:35.804 } 00:37:35.804 EOF 00:37:35.804 )") 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.804 { 00:37:35.804 "params": { 00:37:35.804 "name": "Nvme$subsystem", 00:37:35.804 "trtype": "$TEST_TRANSPORT", 00:37:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.804 "adrfam": "ipv4", 00:37:35.804 "trsvcid": "$NVMF_PORT", 00:37:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.804 "hdgst": ${hdgst:-false}, 00:37:35.804 "ddgst": ${ddgst:-false} 00:37:35.804 }, 00:37:35.804 "method": "bdev_nvme_attach_controller" 00:37:35.804 } 00:37:35.804 EOF 00:37:35.804 )") 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.804 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.804 { 00:37:35.804 "params": { 00:37:35.804 "name": "Nvme$subsystem", 00:37:35.804 "trtype": "$TEST_TRANSPORT", 00:37:35.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.804 "adrfam": "ipv4", 00:37:35.804 "trsvcid": "$NVMF_PORT", 00:37:35.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.804 "hdgst": ${hdgst:-false}, 00:37:35.804 "ddgst": ${ddgst:-false} 00:37:35.804 }, 00:37:35.804 "method": "bdev_nvme_attach_controller" 00:37:35.804 } 00:37:35.804 EOF 00:37:35.804 )") 00:37:35.805 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.805 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.805 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:35.805 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:35.805 { 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme$subsystem", 00:37:35.805 "trtype": "$TEST_TRANSPORT", 00:37:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "$NVMF_PORT", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:37:35.805 "hdgst": ${hdgst:-false}, 00:37:35.805 "ddgst": ${ddgst:-false} 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 } 00:37:35.805 EOF 00:37:35.805 )") 00:37:35.805 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:35.805 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:37:35.805 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:37:35.805 11:46:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme1", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme2", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme3", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme4", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme5", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme6", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme7", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 
"method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme8", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme9", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 },{ 00:37:35.805 "params": { 00:37:35.805 "name": "Nvme10", 00:37:35.805 "trtype": "tcp", 00:37:35.805 "traddr": "10.0.0.2", 00:37:35.805 "adrfam": "ipv4", 00:37:35.805 "trsvcid": "4420", 00:37:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:37:35.805 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:37:35.805 "hdgst": false, 00:37:35.805 "ddgst": false 00:37:35.805 }, 00:37:35.805 "method": "bdev_nvme_attach_controller" 00:37:35.805 }' 00:37:35.805 [2024-06-10 11:46:04.576073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.805 [2024-06-10 11:46:04.640915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2404912 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:37:37.191 11:46:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:37:38.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2404912 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2404530 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:37:38.135 11:46:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 [2024-06-10 11:46:06.952774] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:37:38.135 [2024-06-10 11:46:06.952826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405290 ] 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.135 "trsvcid": "$NVMF_PORT", 00:37:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.135 "hdgst": ${hdgst:-false}, 00:37:38.135 "ddgst": ${ddgst:-false} 00:37:38.135 }, 00:37:38.135 "method": "bdev_nvme_attach_controller" 00:37:38.135 } 00:37:38.135 EOF 00:37:38.135 )") 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:38.135 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:38.135 { 00:37:38.135 "params": { 00:37:38.135 "name": "Nvme$subsystem", 00:37:38.135 "trtype": "$TEST_TRANSPORT", 00:37:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:38.135 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "$NVMF_PORT", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:38.136 "hdgst": ${hdgst:-false}, 00:37:38.136 "ddgst": ${ddgst:-false} 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 } 00:37:38.136 EOF 00:37:38.136 )") 00:37:38.136 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:37:38.136 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
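(Aside for readers following the trace: the gen_nvmf_target_json calls above build one bdev_nvme_attach_controller block per requested subsystem and hand the joined result to jq before bdevperf consumes it over --json. The following is only a condensed sketch reconstructed from the heredocs and the jq/IFS/printf steps visible in this trace; the helper actually sourced from nvmf/common.sh may differ in detail. TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT expand to tcp, 10.0.0.2 and 4420 in this run, as the expanded output below shows.)

gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        # one attach-controller block per requested subsystem (1..10 in this run)
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # join the blocks with commas and wrap them in a bdev-subsystem config for bdevperf
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}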
00:37:38.136 EAL: No free 2048 kB hugepages reported on node 1 00:37:38.136 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:37:38.136 11:46:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme1", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme2", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme3", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme4", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme5", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme6", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme7", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme8", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:37:38.136 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme9", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 },{ 00:37:38.136 "params": { 00:37:38.136 "name": "Nvme10", 00:37:38.136 "trtype": "tcp", 00:37:38.136 "traddr": "10.0.0.2", 00:37:38.136 "adrfam": "ipv4", 00:37:38.136 "trsvcid": "4420", 00:37:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:37:38.136 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:37:38.136 "hdgst": false, 00:37:38.136 "ddgst": false 00:37:38.136 }, 00:37:38.136 "method": "bdev_nvme_attach_controller" 00:37:38.136 }' 00:37:38.136 [2024-06-10 11:46:07.013163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.136 [2024-06-10 11:46:07.078327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.521 Running I/O for 1 seconds... 00:37:40.908 00:37:40.908 Latency(us) 00:37:40.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.908 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme1n1 : 1.09 234.08 14.63 0.00 0.00 270334.08 21736.11 263891.63 00:37:40.908 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme2n1 : 1.09 240.03 15.00 0.00 0.00 252745.07 19879.25 213210.45 00:37:40.908 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme3n1 : 1.05 243.07 15.19 0.00 0.00 251080.75 22173.01 235929.60 00:37:40.908 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme4n1 : 1.10 232.81 14.55 0.00 0.00 257820.80 19442.35 219327.15 00:37:40.908 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme5n1 : 1.10 231.81 14.49 0.00 0.00 254375.47 20534.61 263891.63 00:37:40.908 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme6n1 : 1.13 225.76 14.11 0.00 0.00 256970.45 21954.56 249910.61 00:37:40.908 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme7n1 : 1.14 280.57 17.54 0.00 0.00 203057.49 21736.11 239424.85 00:37:40.908 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme8n1 : 1.17 274.26 17.14 0.00 0.00 204474.03 16493.23 269134.51 00:37:40.908 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme9n1 : 1.17 272.55 17.03 0.00 0.00 202245.29 11632.64 255153.49 00:37:40.908 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:37:40.908 Verification LBA range: start 0x0 length 0x400 00:37:40.908 Nvme10n1 : 1.19 269.58 16.85 0.00 0.00 201101.82 11741.87 272629.76 00:37:40.908 =================================================================================================================== 00:37:40.908 Total : 2504.53 156.53 0.00 0.00 232483.69 11632.64 272629.76 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:40.908 rmmod nvme_tcp 00:37:40.908 rmmod nvme_fabrics 00:37:40.908 rmmod nvme_keyring 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2404530 ']' 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2404530 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 2404530 ']' 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 2404530 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2404530 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2404530' 00:37:40.908 killing process with pid 2404530 00:37:40.908 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 2404530 00:37:40.909 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 2404530 
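(Aside: the per-controller results tabulated above come from a single bdevperf pass against the ten attached NVMe-oF controllers. Going by the shutdown.sh@91 and line-73 traces, the invocation reduces to roughly the following; $rootdir stands in for the workspace checkout and num_subsystems=({1..10}) as set at shutdown.sh@22.)

# verify workload, queue depth 64, 64 KiB I/Os, 1 second run time,
# config generated on the fly and passed in over process substitution
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1

(The -t 1 run time is what produces the one-second per-controller IOPS/MiB/s figures in the table above.)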
00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:41.170 11:46:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.083 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:43.343 00:37:43.343 real 0m16.409s 00:37:43.343 user 0m32.783s 00:37:43.343 sys 0m6.710s 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:43.343 ************************************ 00:37:43.343 END TEST nvmf_shutdown_tc1 00:37:43.343 ************************************ 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:43.343 ************************************ 00:37:43.343 START TEST nvmf_shutdown_tc2 00:37:43.343 ************************************ 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:43.343 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:43.344 11:46:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:43.344 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:43.344 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:43.344 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:43.344 11:46:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:43.344 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:43.344 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:43.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:37:43.605 00:37:43.605 --- 10.0.0.2 ping statistics --- 00:37:43.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.605 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:43.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:37:43.605 00:37:43.605 --- 10.0.0.1 ping statistics --- 00:37:43.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.605 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2406559 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2406559 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2406559 ']' 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@835 -- # local max_retries=100 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.605 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:37:43.605 [2024-06-10 11:46:12.498344] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:37:43.605 [2024-06-10 11:46:12.498393] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.605 EAL: No free 2048 kB hugepages reported on node 1 00:37:43.605 [2024-06-10 11:46:12.563288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:43.866 [2024-06-10 11:46:12.631099] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:43.866 [2024-06-10 11:46:12.631133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:43.866 [2024-06-10 11:46:12.631141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:43.866 [2024-06-10 11:46:12.631148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:43.866 [2024-06-10 11:46:12.631154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
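The nvmftestinit trace above builds the single-host TCP topology used throughout this file: the first E810 port (cvl_0_0) is flushed, moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2/24; the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24; port 4420 is opened with iptables; and reachability is verified with one ping in each direction before nvmf_tgt is started inside the namespace. A condensed bash sketch of those steps, with the interface names and addresses taken from this log (they will differ on other machines):

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps traced above.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, moved into the namespace
INI_IF=cvl_0_1      # initiator-side port, left in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP (port 4420) arriving on the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1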
00:37:43.866 [2024-06-10 11:46:12.631259] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:43.866 [2024-06-10 11:46:12.631416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:43.866 [2024-06-10 11:46:12.631572] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.866 [2024-06-10 11:46:12.631573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.866 [2024-06-10 11:46:12.761515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:43.866 11:46:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:44.127 Malloc1 00:37:44.127 [2024-06-10 11:46:12.860966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.127 Malloc2 00:37:44.127 Malloc3 00:37:44.127 Malloc4 00:37:44.127 Malloc5 00:37:44.127 Malloc6 00:37:44.127 Malloc7 00:37:44.393 Malloc8 00:37:44.393 Malloc9 00:37:44.393 Malloc10 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2406777 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2406777 /var/tmp/bdevperf.sock 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2406777 ']' 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:44.393 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:44.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
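The contents of the heredocs written into rpcs.txt by the loop above are not echoed in the trace, but their visible effect is ten Malloc bdevs (Malloc1..Malloc10), one subsystem per bdev, and a TCP listener on 10.0.0.2:4420, with the transport created beforehand using -t tcp -o -u 8192. As a hedged illustration only, one subsystem's worth of that configuration expressed as direct rpc.py calls could look like the sketch below; the method names are standard SPDK RPCs, while the rpc.py path, bdev size, serial number, and NQN are assumptions rather than values echoed by this run.

#!/usr/bin/env bash
# Illustrative configuration for one of the ten subsystems the test creates.
rpc_py=/path/to/spdk/scripts/rpc.py   # assumed path to SPDK's rpc.py

# Transport options shown in the trace: TCP, -o, io_unit_size 8192.
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192

# One backing RAM bdev plus a subsystem, namespace, and TCP listener (example values).
"$rpc_py" bdev_malloc_create 64 512 -b Malloc1
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420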
00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 [2024-06-10 11:46:13.305293] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:37:44.394 [2024-06-10 11:46:13.305342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406777 ] 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:44.394 { 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme$subsystem", 00:37:44.394 "trtype": "$TEST_TRANSPORT", 00:37:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "$NVMF_PORT", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:44.394 "hdgst": ${hdgst:-false}, 
00:37:44.394 "ddgst": ${ddgst:-false} 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 } 00:37:44.394 EOF 00:37:44.394 )") 00:37:44.394 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:37:44.394 11:46:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme1", 00:37:44.394 "trtype": "tcp", 00:37:44.394 "traddr": "10.0.0.2", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "4420", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:44.394 "hdgst": false, 00:37:44.394 "ddgst": false 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 },{ 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme2", 00:37:44.394 "trtype": "tcp", 00:37:44.394 "traddr": "10.0.0.2", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "4420", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:44.394 "hdgst": false, 00:37:44.394 "ddgst": false 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 },{ 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme3", 00:37:44.394 "trtype": "tcp", 00:37:44.394 "traddr": "10.0.0.2", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "4420", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:37:44.394 "hdgst": false, 00:37:44.394 "ddgst": false 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 },{ 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme4", 00:37:44.394 "trtype": "tcp", 00:37:44.394 "traddr": "10.0.0.2", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "4420", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:37:44.394 "hdgst": false, 00:37:44.394 "ddgst": false 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 },{ 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme5", 00:37:44.394 "trtype": "tcp", 00:37:44.394 "traddr": "10.0.0.2", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "4420", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:37:44.394 "hdgst": false, 00:37:44.394 "ddgst": false 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 },{ 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme6", 00:37:44.394 "trtype": "tcp", 00:37:44.394 "traddr": "10.0.0.2", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "4420", 00:37:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:37:44.394 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:37:44.394 "hdgst": false, 00:37:44.394 "ddgst": false 00:37:44.394 }, 00:37:44.394 "method": "bdev_nvme_attach_controller" 00:37:44.394 },{ 00:37:44.394 "params": { 00:37:44.394 "name": "Nvme7", 00:37:44.394 "trtype": "tcp", 00:37:44.394 "traddr": "10.0.0.2", 00:37:44.394 "adrfam": "ipv4", 00:37:44.394 "trsvcid": "4420", 00:37:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:37:44.395 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:37:44.395 "hdgst": false, 00:37:44.395 "ddgst": false 
00:37:44.395 }, 00:37:44.395 "method": "bdev_nvme_attach_controller" 00:37:44.395 },{ 00:37:44.395 "params": { 00:37:44.395 "name": "Nvme8", 00:37:44.395 "trtype": "tcp", 00:37:44.395 "traddr": "10.0.0.2", 00:37:44.395 "adrfam": "ipv4", 00:37:44.395 "trsvcid": "4420", 00:37:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:37:44.395 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:37:44.395 "hdgst": false, 00:37:44.395 "ddgst": false 00:37:44.395 }, 00:37:44.395 "method": "bdev_nvme_attach_controller" 00:37:44.395 },{ 00:37:44.395 "params": { 00:37:44.395 "name": "Nvme9", 00:37:44.395 "trtype": "tcp", 00:37:44.395 "traddr": "10.0.0.2", 00:37:44.395 "adrfam": "ipv4", 00:37:44.395 "trsvcid": "4420", 00:37:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:37:44.395 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:37:44.395 "hdgst": false, 00:37:44.395 "ddgst": false 00:37:44.395 }, 00:37:44.395 "method": "bdev_nvme_attach_controller" 00:37:44.395 },{ 00:37:44.395 "params": { 00:37:44.395 "name": "Nvme10", 00:37:44.395 "trtype": "tcp", 00:37:44.395 "traddr": "10.0.0.2", 00:37:44.395 "adrfam": "ipv4", 00:37:44.395 "trsvcid": "4420", 00:37:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:37:44.395 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:37:44.395 "hdgst": false, 00:37:44.395 "ddgst": false 00:37:44.395 }, 00:37:44.395 "method": "bdev_nvme_attach_controller" 00:37:44.395 }' 00:37:44.657 [2024-06-10 11:46:13.364626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.657 [2024-06-10 11:46:13.429509] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.569 Running I/O for 10 seconds... 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:46.569 11:46:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:37:46.569 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2406777 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2406777 ']' 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2406777 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2406777 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2406777' 00:37:46.830 killing process with pid 2406777 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2406777 00:37:46.830 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2406777 00:37:46.830 Received shutdown signal, test time was about 0.701657 seconds 00:37:46.830 00:37:46.830 Latency(us) 00:37:46.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.830 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.830 
Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme1n1 : 0.69 277.50 17.34 0.00 0.00 226766.22 20753.07 241172.48 00:37:46.831 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme2n1 : 0.67 191.68 11.98 0.00 0.00 318800.64 25340.59 253405.87 00:37:46.831 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme3n1 : 0.69 278.89 17.43 0.00 0.00 212831.00 21736.11 248162.99 00:37:46.831 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme4n1 : 0.70 275.44 17.21 0.00 0.00 208903.96 19770.03 227191.47 00:37:46.831 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme5n1 : 0.70 274.00 17.13 0.00 0.00 203845.40 28398.93 230686.72 00:37:46.831 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme6n1 : 0.68 189.12 11.82 0.00 0.00 283834.88 19551.57 258648.75 00:37:46.831 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme7n1 : 0.70 275.05 17.19 0.00 0.00 189842.20 28835.84 248162.99 00:37:46.831 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme8n1 : 0.68 189.43 11.84 0.00 0.00 262427.31 33423.36 225443.84 00:37:46.831 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme9n1 : 0.66 193.27 12.08 0.00 0.00 248103.25 19442.35 227191.47 00:37:46.831 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:46.831 Verification LBA range: start 0x0 length 0x400 00:37:46.831 Nvme10n1 : 0.68 186.95 11.68 0.00 0.00 249823.57 23702.19 279620.27 00:37:46.831 =================================================================================================================== 00:37:46.831 Total : 2331.35 145.71 0.00 0.00 234101.83 19442.35 279620.27 00:37:47.092 11:46:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2406559 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:48.114 rmmod nvme_tcp 00:37:48.114 rmmod nvme_fabrics 00:37:48.114 rmmod nvme_keyring 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2406559 ']' 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2406559 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2406559 ']' 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2406559 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:48.114 11:46:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2406559 00:37:48.114 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:48.114 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:48.115 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2406559' 00:37:48.115 killing process with pid 2406559 00:37:48.115 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2406559 00:37:48.115 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2406559 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:48.375 11:46:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:50.924 00:37:50.924 real 0m7.208s 00:37:50.924 user 0m21.210s 00:37:50.924 sys 0m1.111s 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:50.924 11:46:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:50.924 ************************************ 00:37:50.924 END TEST nvmf_shutdown_tc2 00:37:50.924 ************************************ 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:50.924 ************************************ 00:37:50.924 START TEST nvmf_shutdown_tc3 00:37:50.924 ************************************ 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@296 -- # e810=() 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:37:50.924 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:50.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:50.925 
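(The entries just below repeat the same classification for the second port, 0000:4b:00.1. A condensed sketch of what the nvmf/common.sh device-discovery trace above boils down to -- e810/tcp branch only, sysfs lookups simplified, not the script's literal implementation:)

# Sketch only: how an e810 port ends up in net_devs, per the trace above.
# Assumptions: the vendor/device lookup and the operstate check are illustrative;
# the real common.sh caches PCI IDs up front and uses its own helpers for link state.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} (0x8086 - 0x159b)"
    [[ -d $pci/net ]] || continue
    for net_path in "$pci"/net/*; do                      # map PCI function -> kernel netdev
        net_dev=${net_path##*/}
        [[ $(cat "/sys/class/net/$net_dev/operstate") == up ]] || continue
        echo "Found net devices under ${pci##*/}: $net_dev"
        net_devs+=("$net_dev")                            # e.g. cvl_0_0, cvl_0_1
    done
done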
11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:50.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:50.925 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:50.925 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:50.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:50.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:37:50.925 00:37:50.925 --- 10.0.0.2 ping statistics --- 00:37:50.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.925 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:50.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:50.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:37:50.925 00:37:50.925 --- 10.0.0.1 ping statistics --- 00:37:50.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.925 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2408236 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2408236 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2408236 ']' 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:50.925 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.926 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:50.926 11:46:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:50.926 [2024-06-10 11:46:19.843750] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:37:50.926 [2024-06-10 11:46:19.843796] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:50.926 EAL: No free 2048 kB hugepages reported on node 1 00:37:51.187 [2024-06-10 11:46:19.909596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:51.187 [2024-06-10 11:46:19.974481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:51.187 [2024-06-10 11:46:19.974519] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:51.187 [2024-06-10 11:46:19.974527] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:51.187 [2024-06-10 11:46:19.974533] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:51.187 [2024-06-10 11:46:19.974539] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:51.187 [2024-06-10 11:46:19.974647] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:51.187 [2024-06-10 11:46:19.974807] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:51.187 [2024-06-10 11:46:19.974950] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.187 [2024-06-10 11:46:19.974951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:51.187 [2024-06-10 11:46:20.118513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.187 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:51.448 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:51.448 Malloc1 00:37:51.448 [2024-06-10 11:46:20.221847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:51.448 Malloc2 00:37:51.448 Malloc3 00:37:51.448 Malloc4 00:37:51.448 Malloc5 00:37:51.448 Malloc6 00:37:51.710 Malloc7 00:37:51.710 Malloc8 00:37:51.710 Malloc9 00:37:51.710 Malloc10 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2408297 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2408297 /var/tmp/bdevperf.sock 00:37:51.710 
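(At this point the target is running inside cvl_0_0_ns_spdk with ten Malloc bdevs and ten subsystems listening on 10.0.0.2 port 4420, and bdevperf is started against them next. The per-subsystem RPC blocks that the shutdown.sh loop above accumulates in rpcs.txt are not expanded in this trace; a hedged sketch of the kind of setup they typically perform, using standard rpc.py commands -- names, sizes, and options here are illustrative, not the script's exact invocation:)

# Hedged sketch of the per-subsystem setup batched through rpcs.txt (illustrative only).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in {1..10}; do
    $rpc bdev_malloc_create -b "Malloc$i" 64 512                            # RAM-backed bdev, 64 MB / 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done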
11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2408297 ']' 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:51.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.710 { 00:37:51.710 "params": { 00:37:51.710 "name": "Nvme$subsystem", 00:37:51.710 "trtype": "$TEST_TRANSPORT", 00:37:51.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.710 "adrfam": "ipv4", 00:37:51.710 "trsvcid": "$NVMF_PORT", 00:37:51.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.710 "hdgst": ${hdgst:-false}, 00:37:51.710 "ddgst": ${ddgst:-false} 00:37:51.710 }, 00:37:51.710 "method": "bdev_nvme_attach_controller" 00:37:51.710 } 00:37:51.710 EOF 00:37:51.710 )") 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.710 { 00:37:51.710 "params": { 00:37:51.710 "name": "Nvme$subsystem", 00:37:51.710 "trtype": "$TEST_TRANSPORT", 00:37:51.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.710 "adrfam": "ipv4", 00:37:51.710 "trsvcid": "$NVMF_PORT", 00:37:51.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.710 "hdgst": ${hdgst:-false}, 00:37:51.710 "ddgst": ${ddgst:-false} 00:37:51.710 }, 00:37:51.710 "method": "bdev_nvme_attach_controller" 00:37:51.710 } 00:37:51.710 EOF 00:37:51.710 )") 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.710 { 00:37:51.710 "params": { 00:37:51.710 "name": 
"Nvme$subsystem", 00:37:51.710 "trtype": "$TEST_TRANSPORT", 00:37:51.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.710 "adrfam": "ipv4", 00:37:51.710 "trsvcid": "$NVMF_PORT", 00:37:51.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.710 "hdgst": ${hdgst:-false}, 00:37:51.710 "ddgst": ${ddgst:-false} 00:37:51.710 }, 00:37:51.710 "method": "bdev_nvme_attach_controller" 00:37:51.710 } 00:37:51.710 EOF 00:37:51.710 )") 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.710 { 00:37:51.710 "params": { 00:37:51.710 "name": "Nvme$subsystem", 00:37:51.710 "trtype": "$TEST_TRANSPORT", 00:37:51.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.710 "adrfam": "ipv4", 00:37:51.710 "trsvcid": "$NVMF_PORT", 00:37:51.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.710 "hdgst": ${hdgst:-false}, 00:37:51.710 "ddgst": ${ddgst:-false} 00:37:51.710 }, 00:37:51.710 "method": "bdev_nvme_attach_controller" 00:37:51.710 } 00:37:51.710 EOF 00:37:51.710 )") 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.710 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.710 { 00:37:51.710 "params": { 00:37:51.710 "name": "Nvme$subsystem", 00:37:51.710 "trtype": "$TEST_TRANSPORT", 00:37:51.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.710 "adrfam": "ipv4", 00:37:51.710 "trsvcid": "$NVMF_PORT", 00:37:51.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.710 "hdgst": ${hdgst:-false}, 00:37:51.710 "ddgst": ${ddgst:-false} 00:37:51.710 }, 00:37:51.710 "method": "bdev_nvme_attach_controller" 00:37:51.710 } 00:37:51.710 EOF 00:37:51.710 )") 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.711 { 00:37:51.711 "params": { 00:37:51.711 "name": "Nvme$subsystem", 00:37:51.711 "trtype": "$TEST_TRANSPORT", 00:37:51.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.711 "adrfam": "ipv4", 00:37:51.711 "trsvcid": "$NVMF_PORT", 00:37:51.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.711 "hdgst": ${hdgst:-false}, 00:37:51.711 "ddgst": ${ddgst:-false} 00:37:51.711 }, 00:37:51.711 "method": "bdev_nvme_attach_controller" 00:37:51.711 } 00:37:51.711 EOF 00:37:51.711 )") 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.711 [2024-06-10 11:46:20.668155] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:37:51.711 [2024-06-10 11:46:20.668205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408297 ] 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.711 { 00:37:51.711 "params": { 00:37:51.711 "name": "Nvme$subsystem", 00:37:51.711 "trtype": "$TEST_TRANSPORT", 00:37:51.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.711 "adrfam": "ipv4", 00:37:51.711 "trsvcid": "$NVMF_PORT", 00:37:51.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.711 "hdgst": ${hdgst:-false}, 00:37:51.711 "ddgst": ${ddgst:-false} 00:37:51.711 }, 00:37:51.711 "method": "bdev_nvme_attach_controller" 00:37:51.711 } 00:37:51.711 EOF 00:37:51.711 )") 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.711 { 00:37:51.711 "params": { 00:37:51.711 "name": "Nvme$subsystem", 00:37:51.711 "trtype": "$TEST_TRANSPORT", 00:37:51.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.711 "adrfam": "ipv4", 00:37:51.711 "trsvcid": "$NVMF_PORT", 00:37:51.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.711 "hdgst": ${hdgst:-false}, 00:37:51.711 "ddgst": ${ddgst:-false} 00:37:51.711 }, 00:37:51.711 "method": "bdev_nvme_attach_controller" 00:37:51.711 } 00:37:51.711 EOF 00:37:51.711 )") 00:37:51.711 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.972 { 00:37:51.972 "params": { 00:37:51.972 "name": "Nvme$subsystem", 00:37:51.972 "trtype": "$TEST_TRANSPORT", 00:37:51.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.972 "adrfam": "ipv4", 00:37:51.972 "trsvcid": "$NVMF_PORT", 00:37:51.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.972 "hdgst": ${hdgst:-false}, 00:37:51.972 "ddgst": ${ddgst:-false} 00:37:51.972 }, 00:37:51.972 "method": "bdev_nvme_attach_controller" 00:37:51.972 } 00:37:51.972 EOF 00:37:51.972 )") 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:51.972 { 00:37:51.972 "params": { 00:37:51.972 "name": "Nvme$subsystem", 00:37:51.972 "trtype": "$TEST_TRANSPORT", 00:37:51.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:51.972 "adrfam": "ipv4", 00:37:51.972 "trsvcid": "$NVMF_PORT", 00:37:51.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:51.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:51.972 "hdgst": ${hdgst:-false}, 
00:37:51.972 "ddgst": ${ddgst:-false} 00:37:51.972 }, 00:37:51.972 "method": "bdev_nvme_attach_controller" 00:37:51.972 } 00:37:51.972 EOF 00:37:51.972 )") 00:37:51.972 EAL: No free 2048 kB hugepages reported on node 1 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:37:51.972 11:46:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:51.972 "params": { 00:37:51.972 "name": "Nvme1", 00:37:51.972 "trtype": "tcp", 00:37:51.972 "traddr": "10.0.0.2", 00:37:51.972 "adrfam": "ipv4", 00:37:51.972 "trsvcid": "4420", 00:37:51.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:51.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:51.972 "hdgst": false, 00:37:51.972 "ddgst": false 00:37:51.972 }, 00:37:51.972 "method": "bdev_nvme_attach_controller" 00:37:51.972 },{ 00:37:51.972 "params": { 00:37:51.972 "name": "Nvme2", 00:37:51.972 "trtype": "tcp", 00:37:51.972 "traddr": "10.0.0.2", 00:37:51.972 "adrfam": "ipv4", 00:37:51.972 "trsvcid": "4420", 00:37:51.972 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:51.972 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:51.972 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme3", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme4", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme5", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme6", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme7", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 
00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme8", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme9", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 },{ 00:37:51.973 "params": { 00:37:51.973 "name": "Nvme10", 00:37:51.973 "trtype": "tcp", 00:37:51.973 "traddr": "10.0.0.2", 00:37:51.973 "adrfam": "ipv4", 00:37:51.973 "trsvcid": "4420", 00:37:51.973 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:37:51.973 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:37:51.973 "hdgst": false, 00:37:51.973 "ddgst": false 00:37:51.973 }, 00:37:51.973 "method": "bdev_nvme_attach_controller" 00:37:51.973 }' 00:37:51.973 [2024-06-10 11:46:20.728864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:51.973 [2024-06-10 11:46:20.793740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.363 Running I/O for 10 seconds... 00:37:53.363 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:53.363 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:37:53.363 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:53.363 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.363 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:37:53.626 11:46:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:37:53.626 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:37:53.887 11:46:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=132 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 132 -ge 100 ']' 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2408236 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 2408236 ']' 00:37:54.148 11:46:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 2408236 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2408236 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2408236' 00:37:54.148 killing process with pid 2408236 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 2408236 00:37:54.148 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 2408236 00:37:54.148 [2024-06-10 11:46:23.117459] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117540] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117547] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117552] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117557] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117562] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117567] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117572] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117576] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117581] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117586] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117590] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117594] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117599] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117608] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is 
same with the state(5) to be set 00:37:54.149 [... tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set -- same message repeated verbatim in every intervening entry; duplicates elided ...] 00:37:54.149 [2024-06-10 11:46:23.117810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the
state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117815] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117823] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117827] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.149 [2024-06-10 11:46:23.117832] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17020 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.119475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.425 [2024-06-10 11:46:23.119515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.425 [2024-06-10 11:46:23.119531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.425 [2024-06-10 11:46:23.119542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.425 [2024-06-10 11:46:23.119553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.425 [2024-06-10 11:46:23.119565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.425 [2024-06-10 11:46:23.119573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.425 [2024-06-10 11:46:23.119580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.425 [2024-06-10 11:46:23.119588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1423650 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.120982] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.121014] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.121023] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.121029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.121036] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.121043] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.425 [2024-06-10 11:46:23.121049] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 
00:37:54.425 [... tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set -- same message repeated verbatim in every intervening entry; duplicates elided ...] 00:37:54.426 [2024-06-10 11:46:23.121341]
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121347] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121353] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121359] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121366] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121372] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121378] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121385] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121391] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121397] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121404] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121410] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121416] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19a20 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.121712] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:37:54.426 [2024-06-10 11:46:23.125016] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125049] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125064] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125071] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125084] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 
11:46:23.125097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125111] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125117] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125134] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125140] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125146] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125153] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125159] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125172] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125179] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125185] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125191] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125198] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125204] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125211] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125217] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125224] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125230] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125236] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same 
with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125243] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125249] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125256] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125262] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125269] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125275] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125281] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125288] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125295] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125301] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.426 [2024-06-10 11:46:23.125309] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125316] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125322] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125329] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125335] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125341] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125348] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125355] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125361] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125367] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125374] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125380] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125387] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125393] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125399] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125406] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125412] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125418] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125424] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125431] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125437] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125444] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.125450] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17960 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126683] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126715] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126725] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126733] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126746] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126754] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126762] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126770] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126777] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126783] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the 
state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126803] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126816] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126823] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126829] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126836] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126843] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126850] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126856] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126863] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126869] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126876] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126882] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126889] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126895] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126901] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126915] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126930] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126942] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126949] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126955] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126962] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126975] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126987] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.126994] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127000] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127007] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127013] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127025] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127031] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127044] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127051] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127063] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 
11:46:23.127070] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127076] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127082] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127088] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127094] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.427 [2024-06-10 11:46:23.127115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d17e20 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127882] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127899] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127909] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127914] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127918] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127923] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127927] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127940] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same 
with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127950] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127954] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127959] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127963] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127972] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127977] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127986] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127990] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.127994] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128001] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128006] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128011] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128015] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128024] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128028] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128033] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128037] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128041] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128046] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128050] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128055] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128059] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128064] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128068] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128072] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128077] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128081] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128090] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128094] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128099] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128103] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128108] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128112] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128116] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128122] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128126] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128131] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128135] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128140] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the 
state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128144] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128149] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128153] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128158] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128162] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128171] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128175] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d182e0 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128952] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128965] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128971] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128978] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128984] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128991] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.128997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129004] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129010] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129017] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129023] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129222] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129229] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129235] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129242] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.428 [2024-06-10 11:46:23.129248] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129255] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129261] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129267] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129274] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129280] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129287] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129294] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129300] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129306] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129312] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129319] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129325] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129332] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129338] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129345] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129351] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 
11:46:23.129357] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129364] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129370] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129376] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129383] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129389] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129397] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129403] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129410] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129416] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129423] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129430] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129436] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129442] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129449] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129455] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129461] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129468] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129474] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129480] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129486] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129493] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same 
with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129499] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129505] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129512] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129518] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129525] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.129531] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d18780 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130861] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130887] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130893] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130900] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130910] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130917] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130923] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130930] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130943] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130949] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130955] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130961] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130974] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130987] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130993] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.130999] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131006] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131012] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131025] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131031] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131037] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131044] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131050] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131063] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131070] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131076] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131098] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131110] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the 
state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131117] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.429 [2024-06-10 11:46:23.131129] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131136] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131142] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131149] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131155] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131161] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131167] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131174] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131180] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131187] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131193] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131199] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131205] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131212] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131218] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131224] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131231] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131237] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131244] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131250] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131258] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131264] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131271] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d190c0 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131876] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131888] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131893] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131898] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131903] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131907] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131912] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131920] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131925] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131929] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131938] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131943] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131948] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131953] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131957] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 
11:46:23.131962] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131966] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131971] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131975] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131993] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.131997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132002] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132007] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132011] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132016] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132020] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132024] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132034] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132043] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132047] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132052] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132056] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132061] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same 
with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132066] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132070] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132074] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.430 [2024-06-10 11:46:23.132079] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132083] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132087] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132092] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132096] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132101] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132106] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132110] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132114] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132120] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132124] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132133] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132137] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132142] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132146] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132150] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132154] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132159] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132163] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.132168] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19560 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.141656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e15a0 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.141771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ceb0 is same with the state(5) 
to be set 00:37:54.431 [2024-06-10 11:46:23.141858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e440 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.141940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.141991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.141998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144e900 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.142020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142037] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28610 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.142104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431c50 is same with the state(5) to be set 00:37:54.431 [2024-06-10 11:46:23.142180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423650 (9): Bad file descriptor 00:37:54.431 [2024-06-10 11:46:23.142203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.431 [2024-06-10 11:46:23.142240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.431 [2024-06-10 11:46:23.142248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15eda60 is same with the state(5) to be set 00:37:54.432 [2024-06-10 11:46:23.142284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8080 is same with the state(5) to be set 00:37:54.432 [2024-06-10 11:46:23.142368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142413] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:54.432 [2024-06-10 11:46:23.142420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e08b0 is same with the state(5) to be set 00:37:54.432 [2024-06-10 11:46:23.142477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142630] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.432 [2024-06-10 11:46:23.142957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.432 [2024-06-10 11:46:23.142968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.142975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.142984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.142991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143571] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1592c40 was disconnected and freed. reset controller. 
00:37:54.433 [2024-06-10 11:46:23.143852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.433 [2024-06-10 11:46:23.143935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.433 [2024-06-10 11:46:23.143942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.143951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.143958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.143967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.143974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.143983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.143995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 
[2024-06-10 11:46:23.144065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.434 [2024-06-10 11:46:23.144506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.434 [2024-06-10 11:46:23.144515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.144923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.144971] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x141cef0 was disconnected and freed. reset controller. 00:37:54.435 [2024-06-10 11:46:23.147801] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:37:54.435 [2024-06-10 11:46:23.147836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15eda60 (9): Bad file descriptor 00:37:54.435 [2024-06-10 11:46:23.148182] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:37:54.435 [2024-06-10 11:46:23.148213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e440 (9): Bad file descriptor 00:37:54.435 [2024-06-10 11:46:23.148534] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:37:54.435 [2024-06-10 11:46:23.148577] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:37:54.435 [2024-06-10 11:46:23.148608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.435 [2024-06-10 11:46:23.148849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.435 [2024-06-10 11:46:23.148861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.148989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.148995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.436 [2024-06-10 11:46:23.149095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 
11:46:23.149254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.436 [2024-06-10 11:46:23.149537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.436 [2024-06-10 11:46:23.149544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.149742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.149751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1596b30 is same with the state(5) to be set 00:37:54.437 [2024-06-10 11:46:23.149794] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1596b30 was disconnected and freed. reset controller. 00:37:54.437 [2024-06-10 11:46:23.149840] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:37:54.437 [2024-06-10 11:46:23.150138] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:37:54.437 [2024-06-10 11:46:23.150179] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:37:54.437 [2024-06-10 11:46:23.150212] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:37:54.437 [2024-06-10 11:46:23.150590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.437 [2024-06-10 11:46:23.150605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15eda60 with addr=10.0.0.2, port=4420 00:37:54.437 [2024-06-10 11:46:23.150613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15eda60 is same with the state(5) to be set 00:37:54.437 [2024-06-10 11:46:23.151911] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:37:54.437 [2024-06-10 11:46:23.151933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146ceb0 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.437 [2024-06-10 11:46:23.152290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e440 with addr=10.0.0.2, port=4420 00:37:54.437 [2024-06-10 11:46:23.152298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e440 is same with the state(5) to be set 00:37:54.437 [2024-06-10 11:46:23.152309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15eda60 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e15a0 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144e900 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf28610 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431c50 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8080 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e08b0 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e440 (9): Bad file descriptor 00:37:54.437 [2024-06-10 11:46:23.152540] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 
00:37:54.437 [2024-06-10 11:46:23.152547] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:37:54.437 [2024-06-10 11:46:23.152555] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:37:54.437 [2024-06-10 11:46:23.152592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.437 [2024-06-10 11:46:23.152879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.437 [2024-06-10 11:46:23.152886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.152895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.152902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.152911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.152918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.152927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.152934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.152943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.152950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.152959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.152965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.152974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.152981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.152992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.152999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.438 [2024-06-10 11:46:23.153518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.438 [2024-06-10 11:46:23.153524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.153534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.153540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.153549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.153556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.153565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.153572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.153581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.153589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.153598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.153605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.153614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.153621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.153629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15152c0 is same with the state(5) to be set 00:37:54.439 [2024-06-10 11:46:23.155162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.439 [2024-06-10 11:46:23.155175] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:54.439 [2024-06-10 11:46:23.155420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.439 [2024-06-10 11:46:23.155434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146ceb0 with addr=10.0.0.2, port=4420 00:37:54.439 [2024-06-10 11:46:23.155442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ceb0 is same with the state(5) to be set 00:37:54.439 [2024-06-10 11:46:23.155449] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:37:54.439 [2024-06-10 11:46:23.155456] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:37:54.439 [2024-06-10 11:46:23.155463] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:37:54.439 [2024-06-10 11:46:23.155518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:54.439 [2024-06-10 11:46:23.155763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.439 [2024-06-10 11:46:23.155774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1423650 with addr=10.0.0.2, port=4420 00:37:54.439 [2024-06-10 11:46:23.155782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1423650 is same with the state(5) to be set 00:37:54.439 [2024-06-10 11:46:23.155791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146ceb0 (9): Bad file descriptor 00:37:54.439 [2024-06-10 11:46:23.156094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423650 (9): Bad file descriptor 00:37:54.439 [2024-06-10 11:46:23.156104] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:37:54.439 [2024-06-10 11:46:23.156111] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:37:54.439 [2024-06-10 11:46:23.156117] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:37:54.439 [2024-06-10 11:46:23.156170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.439 [2024-06-10 11:46:23.156178] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:54.439 [2024-06-10 11:46:23.156184] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:54.439 [2024-06-10 11:46:23.156190] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:54.439 [2024-06-10 11:46:23.156232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.439 [2024-06-10 11:46:23.158317] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:37:54.439 [2024-06-10 11:46:23.158715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.439 [2024-06-10 11:46:23.158728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15eda60 with addr=10.0.0.2, port=4420 00:37:54.439 [2024-06-10 11:46:23.158735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15eda60 is same with the state(5) to be set 00:37:54.439 [2024-06-10 11:46:23.158771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15eda60 (9): Bad file descriptor 00:37:54.439 [2024-06-10 11:46:23.158805] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:37:54.439 [2024-06-10 11:46:23.158812] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:37:54.439 [2024-06-10 11:46:23.158818] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:37:54.439 [2024-06-10 11:46:23.158855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:54.439 [2024-06-10 11:46:23.160737] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:37:54.439 [2024-06-10 11:46:23.160997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.439 [2024-06-10 11:46:23.161008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e440 with addr=10.0.0.2, port=4420 00:37:54.439 [2024-06-10 11:46:23.161015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e440 is same with the state(5) to be set 00:37:54.439 [2024-06-10 11:46:23.161050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e440 (9): Bad file descriptor 00:37:54.439 [2024-06-10 11:46:23.161085] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:37:54.439 [2024-06-10 11:46:23.161092] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:37:54.439 [2024-06-10 11:46:23.161098] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:37:54.439 [2024-06-10 11:46:23.161135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.439 [2024-06-10 11:46:23.162064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.439 [2024-06-10 11:46:23.162236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.439 [2024-06-10 11:46:23.162243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.440 [2024-06-10 11:46:23.162832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.440 [2024-06-10 11:46:23.162889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.440 [2024-06-10 11:46:23.162896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.162905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.162912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.162921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.162928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.162937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.162944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.162952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.162959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.162968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.162977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.162986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 
11:46:23.162993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.163009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.163024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.163041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.163057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.163073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.163089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.163105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.163114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15940d0 is same with the state(5) to be set 00:37:54.441 [2024-06-10 11:46:23.164441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164650] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.441 [2024-06-10 11:46:23.164881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.441 [2024-06-10 11:46:23.164888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.164897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.164904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.164913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.164920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.164931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.164938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.164948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.164955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.164963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.164971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.164980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.164987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.164996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.442 [2024-06-10 11:46:23.165310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 
11:46:23.165471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.165487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.165495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1595630 is same with the state(5) to be set 00:37:54.442 [2024-06-10 11:46:23.166763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.166776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.166788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.166797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.442 [2024-06-10 11:46:23.166811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.442 [2024-06-10 11:46:23.166819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.166989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.166998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.443 [2024-06-10 11:46:23.167374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.443 [2024-06-10 11:46:23.167384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.167813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.167821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141b9f0 is same with the state(5) to be set 00:37:54.444 [2024-06-10 11:46:23.169093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.444 [2024-06-10 11:46:23.169305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.444 [2024-06-10 11:46:23.169314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.445 [2024-06-10 11:46:23.169953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.445 [2024-06-10 11:46:23.169962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.169969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.169978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.169984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.169993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.170129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.170137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141e3f0 is same with the state(5) to be set 00:37:54.446 [2024-06-10 11:46:23.171399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.446 [2024-06-10 11:46:23.171874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.446 [2024-06-10 11:46:23.171883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.171889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.171899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.171906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.171914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.171921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.171930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.171938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.171946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.171953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.171962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.171969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.171978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.171985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.171996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:54.447 [2024-06-10 11:46:23.172213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 
11:46:23.172372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.447 [2024-06-10 11:46:23.172421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.447 [2024-06-10 11:46:23.172430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.172437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.172445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141f6f0 is same with the state(5) to be set 00:37:54.448 [2024-06-10 11:46:23.173976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.173995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.448 [2024-06-10 11:46:23.174609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.448 [2024-06-10 11:46:23.174618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.174988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.174997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.175003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.175013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:54.449 [2024-06-10 11:46:23.175019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:54.449 [2024-06-10 11:46:23.175027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150daa0 is same with the state(5) to be set 00:37:54.449 [2024-06-10 11:46:23.176516] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:37:54.449 [2024-06-10 11:46:23.176537] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] 
resetting controller
00:37:54.449 [2024-06-10 11:46:23.176546] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:37:54.449 [2024-06-10 11:46:23.176555] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:37:54.449 [2024-06-10 11:46:23.176635] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:37:54.449 [2024-06-10 11:46:23.176652] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:37:54.449 [2024-06-10 11:46:23.176722] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:37:54.449 task offset: 24576 on job bdev=Nvme2n1 fails
00:37:54.449
00:37:54.449 Latency(us)
00:37:54.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:54.449 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme1n1 ended in about 0.94 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme1n1 : 0.94 137.92 8.62 67.90 0.00 307466.59 18896.21 260396.37
00:37:54.449 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme2n1 ended in about 0.93 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme2n1 : 0.93 205.56 12.85 68.52 0.00 225948.91 4423.68 255153.49
00:37:54.449 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme3n1 ended in about 0.95 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme3n1 : 0.95 201.68 12.60 67.23 0.00 225602.35 17367.04 255153.49
00:37:54.449 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme4n1 ended in about 0.95 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme4n1 : 0.95 201.18 12.57 67.06 0.00 221355.52 19442.35 249910.61
00:37:54.449 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme5n1 ended in about 0.94 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme5n1 : 0.94 208.60 13.04 68.12 0.00 209553.25 20862.29 251658.24
00:37:54.449 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme6n1 ended in about 0.96 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme6n1 : 0.96 133.79 8.36 66.90 0.00 283080.82 26214.40 276125.01
00:37:54.449 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme7n1 ended in about 0.94 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme7n1 : 0.94 205.26 12.83 68.42 0.00 202114.88 5270.19 255153.49
00:37:54.449 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme8n1 ended in about 0.96 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme8n1 : 0.96 133.47 8.34 66.74 0.00 271090.92 19879.25 246415.36
00:37:54.449 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme9n1 ended in about 0.96 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme9n1 : 0.96 133.15 8.32 66.57 0.00 265604.55 15182.51 260396.37
00:37:54.449 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:54.449 Job: Nvme10n1 ended in about 0.96 seconds with error
00:37:54.449 Verification LBA range: start 0x0 length 0x400
00:37:54.449 Nvme10n1 : 0.96 132.80 8.30 66.40 0.00 260054.76 23811.41 283115.52
00:37:54.449 ===================================================================================================================
00:37:54.450 Total : 1693.42 105.84 673.85 0.00 242860.85 4423.68 283115.52
00:37:54.450 [2024-06-10 11:46:23.200400] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:37:54.450 [2024-06-10 11:46:23.200439] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:37:54.450 [2024-06-10 11:46:23.200767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.450 [2024-06-10 11:46:23.200784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144e900 with addr=10.0.0.2, port=4420
00:37:54.450 [2024-06-10 11:46:23.200794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144e900 is same with the state(5) to be set
00:37:54.450 [2024-06-10 11:46:23.201171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.450 [2024-06-10 11:46:23.201181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1431c50 with addr=10.0.0.2, port=4420
00:37:54.450 [2024-06-10 11:46:23.201188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1431c50 is same with the state(5) to be set
00:37:54.450 [2024-06-10 11:46:23.201548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.450 [2024-06-10 11:46:23.201558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf28610 with addr=10.0.0.2, port=4420
00:37:54.450 [2024-06-10 11:46:23.201565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf28610 is same with the state(5) to be set
00:37:54.450 [2024-06-10 11:46:23.201915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.450 [2024-06-10 11:46:23.201925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15c8080 with addr=10.0.0.2, port=4420
00:37:54.450 [2024-06-10 11:46:23.201932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c8080 is same with the state(5) to be set
00:37:54.450 [2024-06-10 11:46:23.203531] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:37:54.450 [2024-06-10 11:46:23.203544] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:37:54.450 [2024-06-10 11:46:23.203560] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:37:54.450 [2024-06-10 11:46:23.203568] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:37:54.450 [2024-06-10 11:46:23.203883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:54.450 [2024-06-10 11:46:23.203896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e15a0 with addr=10.0.0.2, port=4420
00:37:54.450 [2024-06-10 11:46:23.203903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e15a0 is same
with the state(5) to be set 00:37:54.450 [2024-06-10 11:46:23.204238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.450 [2024-06-10 11:46:23.204248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e08b0 with addr=10.0.0.2, port=4420 00:37:54.450 [2024-06-10 11:46:23.204255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e08b0 is same with the state(5) to be set 00:37:54.450 [2024-06-10 11:46:23.204266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144e900 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.204277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431c50 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.204286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf28610 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.204295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8080 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.204326] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:54.450 [2024-06-10 11:46:23.204337] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:54.450 [2024-06-10 11:46:23.204349] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:54.450 [2024-06-10 11:46:23.204360] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:54.450 [2024-06-10 11:46:23.204788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.450 [2024-06-10 11:46:23.204800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146ceb0 with addr=10.0.0.2, port=4420 00:37:54.450 [2024-06-10 11:46:23.204807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146ceb0 is same with the state(5) to be set 00:37:54.450 [2024-06-10 11:46:23.205171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.450 [2024-06-10 11:46:23.205181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1423650 with addr=10.0.0.2, port=4420 00:37:54.450 [2024-06-10 11:46:23.205188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1423650 is same with the state(5) to be set 00:37:54.450 [2024-06-10 11:46:23.205414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.450 [2024-06-10 11:46:23.205423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15eda60 with addr=10.0.0.2, port=4420 00:37:54.450 [2024-06-10 11:46:23.205430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15eda60 is same with the state(5) to be set 00:37:54.450 [2024-06-10 11:46:23.205662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.450 [2024-06-10 11:46:23.205675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x146e440 with addr=10.0.0.2, port=4420 00:37:54.450 [2024-06-10 11:46:23.205683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x146e440 is same with the state(5) to be set 00:37:54.450 [2024-06-10 
11:46:23.205692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e15a0 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.205704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e08b0 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.205713] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:37:54.450 [2024-06-10 11:46:23.205719] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:37:54.450 [2024-06-10 11:46:23.205727] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:37:54.450 [2024-06-10 11:46:23.205738] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:37:54.450 [2024-06-10 11:46:23.205744] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:37:54.450 [2024-06-10 11:46:23.205751] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:37:54.450 [2024-06-10 11:46:23.205761] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:37:54.450 [2024-06-10 11:46:23.205767] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:37:54.450 [2024-06-10 11:46:23.205774] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:37:54.450 [2024-06-10 11:46:23.205785] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:37:54.450 [2024-06-10 11:46:23.205791] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:37:54.450 [2024-06-10 11:46:23.205798] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:37:54.450 [2024-06-10 11:46:23.205868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.450 [2024-06-10 11:46:23.205876] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.450 [2024-06-10 11:46:23.205882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.450 [2024-06-10 11:46:23.205888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:54.450 [2024-06-10 11:46:23.205895] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146ceb0 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.205904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423650 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.205913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15eda60 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.205922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146e440 (9): Bad file descriptor 00:37:54.450 [2024-06-10 11:46:23.205930] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:37:54.450 [2024-06-10 11:46:23.205937] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:37:54.450 [2024-06-10 11:46:23.205943] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:37:54.450 [2024-06-10 11:46:23.205952] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:37:54.450 [2024-06-10 11:46:23.205958] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:37:54.450 [2024-06-10 11:46:23.205964] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:37:54.450 [2024-06-10 11:46:23.205991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.451 [2024-06-10 11:46:23.205997] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.451 [2024-06-10 11:46:23.206003] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:37:54.451 [2024-06-10 11:46:23.206010] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:37:54.451 [2024-06-10 11:46:23.206019] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:37:54.451 [2024-06-10 11:46:23.206028] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:54.451 [2024-06-10 11:46:23.206034] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:54.451 [2024-06-10 11:46:23.206041] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:54.451 [2024-06-10 11:46:23.206050] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:37:54.451 [2024-06-10 11:46:23.206057] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:37:54.451 [2024-06-10 11:46:23.206063] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:37:54.451 [2024-06-10 11:46:23.206072] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:37:54.451 [2024-06-10 11:46:23.206078] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:37:54.451 [2024-06-10 11:46:23.206084] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:37:54.451 [2024-06-10 11:46:23.206113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.451 [2024-06-10 11:46:23.206121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.451 [2024-06-10 11:46:23.206127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.451 [2024-06-10 11:46:23.206132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:54.712 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:37:54.712 11:46:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2408297 00:37:55.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2408297) - No such process 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:55.656 rmmod nvme_tcp 00:37:55.656 rmmod nvme_fabrics 00:37:55.656 rmmod nvme_keyring 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:55.656 11:46:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.205 11:46:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:58.205 00:37:58.205 real 0m7.156s 00:37:58.205 user 0m16.468s 00:37:58.205 sys 0m1.154s 00:37:58.205 11:46:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:58.205 11:46:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:37:58.205 ************************************ 00:37:58.205 END TEST nvmf_shutdown_tc3 00:37:58.205 ************************************ 00:37:58.205 11:46:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:37:58.205 00:37:58.205 real 0m31.141s 00:37:58.205 user 1m10.606s 00:37:58.206 sys 0m9.220s 00:37:58.206 11:46:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:58.206 11:46:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:58.206 ************************************ 00:37:58.206 END TEST nvmf_shutdown 00:37:58.206 ************************************ 00:37:58.206 11:46:26 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:37:58.206 11:46:26 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:58.206 11:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:58.206 11:46:26 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:37:58.206 11:46:26 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:58.206 11:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:58.206 11:46:26 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:37:58.206 11:46:26 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:37:58.206 11:46:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:58.206 11:46:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:58.206 11:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:58.206 ************************************ 00:37:58.206 START TEST nvmf_multicontroller 00:37:58.206 ************************************ 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:37:58.206 * Looking for test storage... 
00:37:58.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:37:58.206 11:46:26 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:37:58.206 11:46:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:04.795 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:04.795 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.068 11:46:33 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:05.068 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:05.068 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:05.068 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:05.068 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:05.068 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:05.069 11:46:33 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:05.069 11:46:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:05.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:05.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.732 ms 00:38:05.330 00:38:05.330 --- 10.0.0.2 ping statistics --- 00:38:05.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.330 rtt min/avg/max/mdev = 0.732/0.732/0.732/0.000 ms 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:05.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:05.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:38:05.330 00:38:05.330 --- 10.0.0.1 ping statistics --- 00:38:05.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.330 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2413344 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2413344 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2413344 ']' 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:05.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:05.330 11:46:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:05.330 [2024-06-10 11:46:34.191911] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:38:05.331 [2024-06-10 11:46:34.191980] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:05.331 EAL: No free 2048 kB hugepages reported on node 1 00:38:05.331 [2024-06-10 11:46:34.261989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:05.591 [2024-06-10 11:46:34.334919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:05.591 [2024-06-10 11:46:34.334956] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:05.591 [2024-06-10 11:46:34.334964] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:05.591 [2024-06-10 11:46:34.334970] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:05.591 [2024-06-10 11:46:34.334976] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:05.591 [2024-06-10 11:46:34.335081] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:05.591 [2024-06-10 11:46:34.335238] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.591 [2024-06-10 11:46:34.335239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.162 [2024-06-10 11:46:35.107299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:06.162 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.162 11:46:35 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 Malloc0 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 [2024-06-10 11:46:35.173012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 [2024-06-10 11:46:35.184973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 Malloc1 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2413511 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2413511 /var/tmp/bdevperf.sock 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2413511 ']' 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:06.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:06.423 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.684 NVMe0n1 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.684 1 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.684 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.684 request: 00:38:06.684 { 00:38:06.684 "name": "NVMe0", 00:38:06.684 "trtype": "tcp", 00:38:06.684 "traddr": "10.0.0.2", 00:38:06.684 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:38:06.684 "hostaddr": "10.0.0.2", 00:38:06.684 "hostsvcid": "60000", 00:38:06.684 "adrfam": "ipv4", 00:38:06.684 "trsvcid": "4420", 00:38:06.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.685 "method": 
"bdev_nvme_attach_controller", 00:38:06.685 "req_id": 1 00:38:06.685 } 00:38:06.946 Got JSON-RPC error response 00:38:06.946 response: 00:38:06.946 { 00:38:06.946 "code": -114, 00:38:06.946 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:38:06.946 } 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.946 request: 00:38:06.946 { 00:38:06.946 "name": "NVMe0", 00:38:06.946 "trtype": "tcp", 00:38:06.946 "traddr": "10.0.0.2", 00:38:06.946 "hostaddr": "10.0.0.2", 00:38:06.946 "hostsvcid": "60000", 00:38:06.946 "adrfam": "ipv4", 00:38:06.946 "trsvcid": "4420", 00:38:06.946 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:06.946 "method": "bdev_nvme_attach_controller", 00:38:06.946 "req_id": 1 00:38:06.946 } 00:38:06.946 Got JSON-RPC error response 00:38:06.946 response: 00:38:06.946 { 00:38:06.946 "code": -114, 00:38:06.946 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:38:06.946 } 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.946 request: 00:38:06.946 { 00:38:06.946 "name": "NVMe0", 00:38:06.946 "trtype": "tcp", 00:38:06.946 "traddr": "10.0.0.2", 00:38:06.946 "hostaddr": "10.0.0.2", 00:38:06.946 "hostsvcid": "60000", 00:38:06.946 "adrfam": "ipv4", 00:38:06.946 "trsvcid": "4420", 00:38:06.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.946 "multipath": "disable", 00:38:06.946 "method": "bdev_nvme_attach_controller", 00:38:06.946 "req_id": 1 00:38:06.946 } 00:38:06.946 Got JSON-RPC error response 00:38:06.946 response: 00:38:06.946 { 00:38:06.946 "code": -114, 00:38:06.946 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:38:06.946 } 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.946 request: 00:38:06.946 { 00:38:06.946 "name": "NVMe0", 00:38:06.946 "trtype": "tcp", 00:38:06.946 "traddr": "10.0.0.2", 00:38:06.946 "hostaddr": "10.0.0.2", 00:38:06.946 "hostsvcid": "60000", 00:38:06.946 "adrfam": "ipv4", 00:38:06.946 "trsvcid": "4420", 00:38:06.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.946 "multipath": "failover", 00:38:06.946 "method": "bdev_nvme_attach_controller", 00:38:06.946 "req_id": 1 00:38:06.946 } 00:38:06.946 Got JSON-RPC error response 00:38:06.946 response: 00:38:06.946 { 00:38:06.946 "code": -114, 00:38:06.946 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:38:06.946 } 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.946 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:06.946 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:07.207 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:38:07.207 11:46:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:08.149 0 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2413511 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2413511 ']' 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2413511 00:38:08.149 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2413511 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2413511' 00:38:08.410 killing process with pid 2413511 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2413511 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2413511 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:38:08.410 11:46:37 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:38:08.410 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:38:08.410 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:38:08.410 [2024-06-10 11:46:35.302265] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:38:08.410 [2024-06-10 11:46:35.302322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413511 ] 00:38:08.410 EAL: No free 2048 kB hugepages reported on node 1 00:38:08.410 [2024-06-10 11:46:35.361490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.410 [2024-06-10 11:46:35.426279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.410 [2024-06-10 11:46:35.942728] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 8a24e554-f132-48e3-a9ae-799acfe26a10 already exists 00:38:08.410 [2024-06-10 11:46:35.942758] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:8a24e554-f132-48e3-a9ae-799acfe26a10 alias for bdev NVMe1n1 00:38:08.410 [2024-06-10 11:46:35.942768] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:38:08.410 Running I/O for 1 seconds... 
00:38:08.410 00:38:08.410 Latency(us) 00:38:08.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.410 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:38:08.410 NVMe0n1 : 1.01 20461.82 79.93 0.00 0.00 6245.44 5406.72 15400.96 00:38:08.410 =================================================================================================================== 00:38:08.410 Total : 20461.82 79.93 0.00 0.00 6245.44 5406.72 15400.96 00:38:08.410 Received shutdown signal, test time was about 1.000000 seconds 00:38:08.410 00:38:08.411 Latency(us) 00:38:08.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:08.411 =================================================================================================================== 00:38:08.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:08.411 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:08.411 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:08.411 rmmod nvme_tcp 00:38:08.411 rmmod nvme_fabrics 00:38:08.671 rmmod nvme_keyring 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2413344 ']' 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2413344 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2413344 ']' 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2413344 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2413344 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2413344' 00:38:08.671 killing process with pid 2413344 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2413344 00:38:08.671 11:46:37 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2413344 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:08.671 11:46:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.216 11:46:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:11.216 00:38:11.216 real 0m12.959s 00:38:11.216 user 0m14.396s 00:38:11.216 sys 0m5.972s 00:38:11.216 11:46:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:11.216 11:46:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:38:11.216 ************************************ 00:38:11.216 END TEST nvmf_multicontroller 00:38:11.216 ************************************ 00:38:11.216 11:46:39 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:38:11.216 11:46:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:11.216 11:46:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:11.216 11:46:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:11.216 ************************************ 00:38:11.216 START TEST nvmf_aer 00:38:11.216 ************************************ 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:38:11.216 * Looking for test storage... 
00:38:11.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:38:11.216 11:46:39 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:38:11.217 11:46:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:17.805 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:38:17.805 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:17.805 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:17.806 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:17.806 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:17.806 
11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:17.806 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:18.067 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:18.067 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:18.067 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:18.067 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:18.067 11:46:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:18.067 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:18.067 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:18.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:18.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:38:18.067 00:38:18.067 --- 10.0.0.2 ping statistics --- 00:38:18.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.067 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:38:18.067 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:18.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:18.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:38:18.067 00:38:18.067 --- 10.0.0.1 ping statistics --- 00:38:18.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.067 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:38:18.067 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:18.067 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2418047 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2418047 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 2418047 ']' 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:18.328 11:46:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:18.328 [2024-06-10 11:46:47.139158] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:38:18.328 [2024-06-10 11:46:47.139220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.328 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.328 [2024-06-10 11:46:47.209554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:18.328 [2024-06-10 11:46:47.285178] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:18.328 [2024-06-10 11:46:47.285218] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:18.328 [2024-06-10 11:46:47.285226] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:18.328 [2024-06-10 11:46:47.285232] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:18.328 [2024-06-10 11:46:47.285238] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:18.328 [2024-06-10 11:46:47.285372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.328 [2024-06-10 11:46:47.285513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:18.328 [2024-06-10 11:46:47.285692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.328 [2024-06-10 11:46:47.285693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.270 [2024-06-10 11:46:48.057524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.270 Malloc0 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.270 [2024-06-10 11:46:48.116904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.270 [ 00:38:19.270 { 00:38:19.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:19.270 "subtype": "Discovery", 00:38:19.270 "listen_addresses": [], 00:38:19.270 "allow_any_host": true, 00:38:19.270 "hosts": [] 00:38:19.270 }, 00:38:19.270 { 00:38:19.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:19.270 "subtype": "NVMe", 00:38:19.270 "listen_addresses": [ 00:38:19.270 { 00:38:19.270 "trtype": "TCP", 00:38:19.270 "adrfam": "IPv4", 00:38:19.270 "traddr": "10.0.0.2", 00:38:19.270 "trsvcid": "4420" 00:38:19.270 } 00:38:19.270 ], 00:38:19.270 "allow_any_host": true, 00:38:19.270 "hosts": [], 00:38:19.270 "serial_number": "SPDK00000000000001", 00:38:19.270 "model_number": "SPDK bdev Controller", 00:38:19.270 "max_namespaces": 2, 00:38:19.270 "min_cntlid": 1, 00:38:19.270 "max_cntlid": 65519, 00:38:19.270 "namespaces": [ 00:38:19.270 { 00:38:19.270 "nsid": 1, 00:38:19.270 "bdev_name": "Malloc0", 00:38:19.270 "name": "Malloc0", 00:38:19.270 "nguid": "54C7421BFEC944D697C1A3385634D156", 00:38:19.270 "uuid": "54c7421b-fec9-44d6-97c1-a3385634d156" 00:38:19.270 } 00:38:19.270 ] 00:38:19.270 } 00:38:19.270 ] 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2418400 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:38:19.270 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:38:19.270 EAL: No free 2048 kB hugepages reported on node 1 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:38:19.530 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.531 Malloc1 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.531 Asynchronous Event Request test 00:38:19.531 Attaching to 10.0.0.2 00:38:19.531 Attached to 10.0.0.2 00:38:19.531 Registering asynchronous event callbacks... 00:38:19.531 Starting namespace attribute notice tests for all controllers... 00:38:19.531 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:38:19.531 aer_cb - Changed Namespace 00:38:19.531 Cleaning up... 00:38:19.531 [ 00:38:19.531 { 00:38:19.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:19.531 "subtype": "Discovery", 00:38:19.531 "listen_addresses": [], 00:38:19.531 "allow_any_host": true, 00:38:19.531 "hosts": [] 00:38:19.531 }, 00:38:19.531 { 00:38:19.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:19.531 "subtype": "NVMe", 00:38:19.531 "listen_addresses": [ 00:38:19.531 { 00:38:19.531 "trtype": "TCP", 00:38:19.531 "adrfam": "IPv4", 00:38:19.531 "traddr": "10.0.0.2", 00:38:19.531 "trsvcid": "4420" 00:38:19.531 } 00:38:19.531 ], 00:38:19.531 "allow_any_host": true, 00:38:19.531 "hosts": [], 00:38:19.531 "serial_number": "SPDK00000000000001", 00:38:19.531 "model_number": "SPDK bdev Controller", 00:38:19.531 "max_namespaces": 2, 00:38:19.531 "min_cntlid": 1, 00:38:19.531 "max_cntlid": 65519, 00:38:19.531 "namespaces": [ 00:38:19.531 { 00:38:19.531 "nsid": 1, 00:38:19.531 "bdev_name": "Malloc0", 00:38:19.531 "name": "Malloc0", 00:38:19.531 "nguid": "54C7421BFEC944D697C1A3385634D156", 00:38:19.531 "uuid": "54c7421b-fec9-44d6-97c1-a3385634d156" 00:38:19.531 }, 00:38:19.531 { 00:38:19.531 "nsid": 2, 00:38:19.531 "bdev_name": "Malloc1", 00:38:19.531 "name": "Malloc1", 00:38:19.531 "nguid": "E98BE227A33D45FBA584E07E8180D817", 00:38:19.531 "uuid": "e98be227-a33d-45fb-a584-e07e8180d817" 00:38:19.531 } 00:38:19.531 ] 00:38:19.531 } 00:38:19.531 ] 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2418400 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:19.531 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:19.531 rmmod nvme_tcp 00:38:19.531 rmmod nvme_fabrics 00:38:19.792 rmmod nvme_keyring 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2418047 ']' 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2418047 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 2418047 ']' 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 2418047 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2418047 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2418047' 00:38:19.792 killing process with pid 2418047 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 2418047 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 2418047 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:38:19.792 11:46:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.342 11:46:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:22.342 00:38:22.342 real 0m11.011s 00:38:22.342 user 0m7.943s 00:38:22.342 sys 0m5.733s 00:38:22.342 11:46:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:22.342 11:46:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:38:22.342 ************************************ 00:38:22.342 END TEST nvmf_aer 00:38:22.342 ************************************ 00:38:22.342 11:46:50 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:38:22.342 11:46:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:22.342 11:46:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:22.342 11:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:22.342 ************************************ 00:38:22.342 START TEST nvmf_async_init 00:38:22.342 ************************************ 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:38:22.342 * Looking for test storage... 00:38:22.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.342 11:46:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:22.342 
11:46:51 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.342 11:46:51 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.342 11:46:51 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=73cf150c9bde4338a341b00eb9d6b191 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:38:22.343 11:46:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:30.516 11:46:57 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:30.516 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:30.516 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:30.516 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:30.516 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:30.516 11:46:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:30.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:30.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:38:30.516 00:38:30.516 --- 10.0.0.2 ping statistics --- 00:38:30.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.516 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:38:30.516 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:30.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:30.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:38:30.516 00:38:30.516 --- 10.0.0.1 ping statistics --- 00:38:30.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:30.516 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2422625 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2422625 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@830 -- # '[' -z 2422625 ']' 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:30.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 [2024-06-10 11:46:58.377316] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:38:30.517 [2024-06-10 11:46:58.377366] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:30.517 EAL: No free 2048 kB hugepages reported on node 1 00:38:30.517 [2024-06-10 11:46:58.445830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.517 [2024-06-10 11:46:58.509999] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:30.517 [2024-06-10 11:46:58.510034] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:30.517 [2024-06-10 11:46:58.510041] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:30.517 [2024-06-10 11:46:58.510048] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:30.517 [2024-06-10 11:46:58.510053] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:30.517 [2024-06-10 11:46:58.510077] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 [2024-06-10 11:46:58.631130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 null0 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 73cf150c9bde4338a341b00eb9d6b191 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 [2024-06-10 11:46:58.687397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 nvme0n1 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 [ 00:38:30.517 { 00:38:30.517 "name": "nvme0n1", 00:38:30.517 "aliases": [ 00:38:30.517 "73cf150c-9bde-4338-a341-b00eb9d6b191" 00:38:30.517 ], 00:38:30.517 "product_name": "NVMe disk", 00:38:30.517 "block_size": 512, 00:38:30.517 "num_blocks": 2097152, 00:38:30.517 "uuid": "73cf150c-9bde-4338-a341-b00eb9d6b191", 00:38:30.517 "assigned_rate_limits": { 00:38:30.517 "rw_ios_per_sec": 0, 00:38:30.517 "rw_mbytes_per_sec": 0, 00:38:30.517 "r_mbytes_per_sec": 0, 00:38:30.517 "w_mbytes_per_sec": 0 00:38:30.517 }, 00:38:30.517 "claimed": false, 00:38:30.517 "zoned": false, 00:38:30.517 "supported_io_types": { 00:38:30.517 "read": true, 00:38:30.517 "write": true, 00:38:30.517 "unmap": false, 00:38:30.517 "write_zeroes": true, 00:38:30.517 "flush": true, 00:38:30.517 "reset": true, 00:38:30.517 "compare": true, 00:38:30.517 "compare_and_write": true, 00:38:30.517 "abort": true, 00:38:30.517 "nvme_admin": true, 00:38:30.517 "nvme_io": true 00:38:30.517 }, 00:38:30.517 "memory_domains": [ 00:38:30.517 { 00:38:30.517 "dma_device_id": "system", 00:38:30.517 "dma_device_type": 1 00:38:30.517 } 00:38:30.517 ], 00:38:30.517 "driver_specific": { 00:38:30.517 "nvme": [ 00:38:30.517 { 00:38:30.517 "trid": { 00:38:30.517 "trtype": "TCP", 00:38:30.517 "adrfam": "IPv4", 00:38:30.517 "traddr": "10.0.0.2", 00:38:30.517 "trsvcid": "4420", 00:38:30.517 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:30.517 }, 00:38:30.517 "ctrlr_data": { 00:38:30.517 "cntlid": 1, 00:38:30.517 "vendor_id": "0x8086", 00:38:30.517 "model_number": "SPDK bdev Controller", 00:38:30.517 "serial_number": "00000000000000000000", 00:38:30.517 "firmware_revision": "24.09", 00:38:30.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:30.517 "oacs": { 00:38:30.517 "security": 0, 00:38:30.517 "format": 0, 00:38:30.517 "firmware": 0, 00:38:30.517 "ns_manage": 0 00:38:30.517 }, 00:38:30.517 "multi_ctrlr": true, 00:38:30.517 "ana_reporting": false 00:38:30.517 }, 00:38:30.517 "vs": { 00:38:30.517 "nvme_version": "1.3" 00:38:30.517 }, 00:38:30.517 "ns_data": { 00:38:30.517 "id": 1, 00:38:30.517 "can_share": true 00:38:30.517 } 00:38:30.517 } 00:38:30.517 ], 00:38:30.517 "mp_policy": "active_passive" 00:38:30.517 } 00:38:30.517 } 00:38:30.517 ] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 [2024-06-10 11:46:58.951919] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:30.517 [2024-06-10 11:46:58.951983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f28b20 (9): Bad file descriptor 00:38:30.517 [2024-06-10 11:46:59.083761] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:30.517 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.517 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:30.517 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.517 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.517 [ 00:38:30.517 { 00:38:30.517 "name": "nvme0n1", 00:38:30.517 "aliases": [ 00:38:30.517 "73cf150c-9bde-4338-a341-b00eb9d6b191" 00:38:30.517 ], 00:38:30.517 "product_name": "NVMe disk", 00:38:30.517 "block_size": 512, 00:38:30.518 "num_blocks": 2097152, 00:38:30.518 "uuid": "73cf150c-9bde-4338-a341-b00eb9d6b191", 00:38:30.518 "assigned_rate_limits": { 00:38:30.518 "rw_ios_per_sec": 0, 00:38:30.518 "rw_mbytes_per_sec": 0, 00:38:30.518 "r_mbytes_per_sec": 0, 00:38:30.518 "w_mbytes_per_sec": 0 00:38:30.518 }, 00:38:30.518 "claimed": false, 00:38:30.518 "zoned": false, 00:38:30.518 "supported_io_types": { 00:38:30.518 "read": true, 00:38:30.518 "write": true, 00:38:30.518 "unmap": false, 00:38:30.518 "write_zeroes": true, 00:38:30.518 "flush": true, 00:38:30.518 "reset": true, 00:38:30.518 "compare": true, 00:38:30.518 "compare_and_write": true, 00:38:30.518 "abort": true, 00:38:30.518 "nvme_admin": true, 00:38:30.518 "nvme_io": true 00:38:30.518 }, 00:38:30.518 "memory_domains": [ 00:38:30.518 { 00:38:30.518 "dma_device_id": "system", 00:38:30.518 "dma_device_type": 1 00:38:30.518 } 00:38:30.518 ], 00:38:30.518 "driver_specific": { 00:38:30.518 "nvme": [ 00:38:30.518 { 00:38:30.518 "trid": { 00:38:30.518 "trtype": "TCP", 00:38:30.518 "adrfam": "IPv4", 00:38:30.518 "traddr": "10.0.0.2", 00:38:30.518 "trsvcid": "4420", 00:38:30.518 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:30.518 }, 00:38:30.518 "ctrlr_data": { 00:38:30.518 "cntlid": 2, 00:38:30.518 "vendor_id": "0x8086", 00:38:30.518 "model_number": "SPDK bdev Controller", 00:38:30.518 "serial_number": "00000000000000000000", 00:38:30.518 "firmware_revision": "24.09", 00:38:30.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:30.518 "oacs": { 00:38:30.518 "security": 0, 00:38:30.518 "format": 0, 00:38:30.518 "firmware": 0, 00:38:30.518 "ns_manage": 0 00:38:30.518 }, 00:38:30.518 "multi_ctrlr": true, 00:38:30.518 "ana_reporting": false 00:38:30.518 }, 00:38:30.518 "vs": { 00:38:30.518 "nvme_version": "1.3" 00:38:30.518 }, 00:38:30.518 "ns_data": { 00:38:30.518 "id": 1, 00:38:30.518 "can_share": true 00:38:30.518 } 00:38:30.518 } 00:38:30.518 ], 00:38:30.518 "mp_policy": "active_passive" 00:38:30.518 } 00:38:30.518 } 00:38:30.518 ] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.X5fTpFOke6 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.X5fTpFOke6 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.518 [2024-06-10 11:46:59.148516] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:30.518 [2024-06-10 11:46:59.148630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X5fTpFOke6 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.518 [2024-06-10 11:46:59.160544] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.X5fTpFOke6 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.518 [2024-06-10 11:46:59.172579] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:30.518 [2024-06-10 11:46:59.172614] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:38:30.518 nvme0n1 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.518 [ 00:38:30.518 { 00:38:30.518 "name": "nvme0n1", 00:38:30.518 "aliases": [ 00:38:30.518 "73cf150c-9bde-4338-a341-b00eb9d6b191" 00:38:30.518 ], 00:38:30.518 
"product_name": "NVMe disk", 00:38:30.518 "block_size": 512, 00:38:30.518 "num_blocks": 2097152, 00:38:30.518 "uuid": "73cf150c-9bde-4338-a341-b00eb9d6b191", 00:38:30.518 "assigned_rate_limits": { 00:38:30.518 "rw_ios_per_sec": 0, 00:38:30.518 "rw_mbytes_per_sec": 0, 00:38:30.518 "r_mbytes_per_sec": 0, 00:38:30.518 "w_mbytes_per_sec": 0 00:38:30.518 }, 00:38:30.518 "claimed": false, 00:38:30.518 "zoned": false, 00:38:30.518 "supported_io_types": { 00:38:30.518 "read": true, 00:38:30.518 "write": true, 00:38:30.518 "unmap": false, 00:38:30.518 "write_zeroes": true, 00:38:30.518 "flush": true, 00:38:30.518 "reset": true, 00:38:30.518 "compare": true, 00:38:30.518 "compare_and_write": true, 00:38:30.518 "abort": true, 00:38:30.518 "nvme_admin": true, 00:38:30.518 "nvme_io": true 00:38:30.518 }, 00:38:30.518 "memory_domains": [ 00:38:30.518 { 00:38:30.518 "dma_device_id": "system", 00:38:30.518 "dma_device_type": 1 00:38:30.518 } 00:38:30.518 ], 00:38:30.518 "driver_specific": { 00:38:30.518 "nvme": [ 00:38:30.518 { 00:38:30.518 "trid": { 00:38:30.518 "trtype": "TCP", 00:38:30.518 "adrfam": "IPv4", 00:38:30.518 "traddr": "10.0.0.2", 00:38:30.518 "trsvcid": "4421", 00:38:30.518 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:30.518 }, 00:38:30.518 "ctrlr_data": { 00:38:30.518 "cntlid": 3, 00:38:30.518 "vendor_id": "0x8086", 00:38:30.518 "model_number": "SPDK bdev Controller", 00:38:30.518 "serial_number": "00000000000000000000", 00:38:30.518 "firmware_revision": "24.09", 00:38:30.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:30.518 "oacs": { 00:38:30.518 "security": 0, 00:38:30.518 "format": 0, 00:38:30.518 "firmware": 0, 00:38:30.518 "ns_manage": 0 00:38:30.518 }, 00:38:30.518 "multi_ctrlr": true, 00:38:30.518 "ana_reporting": false 00:38:30.518 }, 00:38:30.518 "vs": { 00:38:30.518 "nvme_version": "1.3" 00:38:30.518 }, 00:38:30.518 "ns_data": { 00:38:30.518 "id": 1, 00:38:30.518 "can_share": true 00:38:30.518 } 00:38:30.518 } 00:38:30.518 ], 00:38:30.518 "mp_policy": "active_passive" 00:38:30.518 } 00:38:30.518 } 00:38:30.518 ] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.X5fTpFOke6 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:30.518 rmmod nvme_tcp 00:38:30.518 rmmod nvme_fabrics 00:38:30.518 rmmod nvme_keyring 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2422625 ']' 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2422625 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 2422625 ']' 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 2422625 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2422625 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:30.518 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:30.519 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2422625' 00:38:30.519 killing process with pid 2422625 00:38:30.519 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 2422625 00:38:30.519 [2024-06-10 11:46:59.407325] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:38:30.519 [2024-06-10 11:46:59.407350] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:38:30.519 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 2422625 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:30.780 11:46:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:32.696 11:47:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:32.696 00:38:32.696 real 0m10.734s 00:38:32.696 user 0m3.268s 00:38:32.696 sys 0m5.821s 00:38:32.696 11:47:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:32.696 11:47:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:38:32.696 ************************************ 00:38:32.696 END TEST nvmf_async_init 00:38:32.696 ************************************ 00:38:32.696 11:47:01 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:38:32.696 11:47:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:32.696 11:47:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:32.696 11:47:01 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:38:32.957 ************************************ 00:38:32.957 START TEST dma 00:38:32.957 ************************************ 00:38:32.957 11:47:01 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:38:32.957 * Looking for test storage... 00:38:32.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:32.957 11:47:01 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:32.957 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:32.957 11:47:01 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:32.957 11:47:01 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:32.957 11:47:01 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:32.957 11:47:01 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.957 11:47:01 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.958 11:47:01 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.958 11:47:01 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:38:32.958 11:47:01 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:32.958 11:47:01 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:32.958 11:47:01 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:38:32.958 11:47:01 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:38:32.958 00:38:32.958 real 0m0.121s 00:38:32.958 user 0m0.066s 00:38:32.958 sys 0m0.064s 00:38:32.958 11:47:01 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:32.958 11:47:01 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:38:32.958 ************************************ 00:38:32.958 END TEST dma 00:38:32.958 ************************************ 00:38:32.958 11:47:01 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:38:32.958 11:47:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:32.958 11:47:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:32.958 11:47:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:32.958 ************************************ 00:38:32.958 START TEST 
nvmf_identify 00:38:32.958 ************************************ 00:38:32.958 11:47:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:38:33.220 * Looking for test storage... 00:38:33.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:33.220 11:47:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:38:33.220 11:47:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:41.365 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:41.365 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:41.365 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:41.366 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:41.366 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:41.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:41.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:38:41.366 00:38:41.366 --- 10.0.0.2 ping statistics --- 00:38:41.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.366 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:41.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:41.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:38:41.366 00:38:41.366 --- 10.0.0.1 ping statistics --- 00:38:41.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.366 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2427108 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2427108 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 2427108 ']' 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:41.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:41.366 11:47:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.366 [2024-06-10 11:47:09.443398] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:38:41.366 [2024-06-10 11:47:09.443506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:41.366 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.366 [2024-06-10 11:47:09.516111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:41.366 [2024-06-10 11:47:09.592184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
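For reference, the network-namespace plumbing and target launch that the trace above performs can be condensed into the following sketch. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.x addresses, the core mask, and the nvmf_tgt binary location are taken from this particular run and will differ on other hardware; the socket-polling loop at the end is an assumption standing in for the suite's waitforlisten helper.

# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on the initiator-side interface and sanity-check the path
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# load the kernel initiator and start the SPDK target inside the namespace
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# wait for the target's RPC socket before issuing RPCs (assumption: stands in for waitforlisten)
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done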
00:38:41.366 [2024-06-10 11:47:09.592224] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:41.366 [2024-06-10 11:47:09.592234] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:41.366 [2024-06-10 11:47:09.592241] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:41.366 [2024-06-10 11:47:09.592246] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:41.366 [2024-06-10 11:47:09.592358] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:41.366 [2024-06-10 11:47:09.592479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:41.366 [2024-06-10 11:47:09.592641] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.366 [2024-06-10 11:47:09.592641] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.366 [2024-06-10 11:47:10.309352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:41.366 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 Malloc0 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 [2024-06-10 11:47:10.408803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.630 [ 00:38:41.630 { 00:38:41.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:41.630 "subtype": "Discovery", 00:38:41.630 "listen_addresses": [ 00:38:41.630 { 00:38:41.630 "trtype": "TCP", 00:38:41.630 "adrfam": "IPv4", 00:38:41.630 "traddr": "10.0.0.2", 00:38:41.630 "trsvcid": "4420" 00:38:41.630 } 00:38:41.630 ], 00:38:41.630 "allow_any_host": true, 00:38:41.630 "hosts": [] 00:38:41.630 }, 00:38:41.630 { 00:38:41.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:41.630 "subtype": "NVMe", 00:38:41.630 "listen_addresses": [ 00:38:41.630 { 00:38:41.630 "trtype": "TCP", 00:38:41.630 "adrfam": "IPv4", 00:38:41.630 "traddr": "10.0.0.2", 00:38:41.630 "trsvcid": "4420" 00:38:41.630 } 00:38:41.630 ], 00:38:41.630 "allow_any_host": true, 00:38:41.630 "hosts": [], 00:38:41.630 "serial_number": "SPDK00000000000001", 00:38:41.630 "model_number": "SPDK bdev Controller", 00:38:41.630 "max_namespaces": 32, 00:38:41.630 "min_cntlid": 1, 00:38:41.630 "max_cntlid": 65519, 00:38:41.630 "namespaces": [ 00:38:41.630 { 00:38:41.630 "nsid": 1, 00:38:41.630 "bdev_name": "Malloc0", 00:38:41.630 "name": "Malloc0", 00:38:41.630 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:38:41.630 "eui64": "ABCDEF0123456789", 00:38:41.630 "uuid": "afe64faa-31b2-4012-8797-57cdc35b4a5b" 00:38:41.630 } 00:38:41.630 ] 00:38:41.630 } 00:38:41.630 ] 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.630 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:38:41.630 [2024-06-10 11:47:10.469615] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
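The subsystem layout reported in the JSON above comes from the rpc_cmd calls traced just before it. Collapsed into direct scripts/rpc.py invocations, under the assumption that the target answers on the default /var/tmp/spdk.sock RPC socket, the same setup looks like this; the flags are copied verbatim from the trace:

# TCP transport with the options used by this suite (-o, -u 8192)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem allowing any host, with a fixed serial number
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
# data listener plus a discovery listener on the namespaced target address
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# verify, equivalent to the nvmf_get_subsystems output shown above
scripts/rpc.py nvmf_get_subsystems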
00:38:41.630 [2024-06-10 11:47:10.469657] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427285 ] 00:38:41.630 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.630 [2024-06-10 11:47:10.501341] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:38:41.630 [2024-06-10 11:47:10.501384] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:38:41.630 [2024-06-10 11:47:10.501390] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:38:41.630 [2024-06-10 11:47:10.501402] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:38:41.630 [2024-06-10 11:47:10.501410] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:38:41.630 [2024-06-10 11:47:10.504708] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:38:41.630 [2024-06-10 11:47:10.504740] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1353ec0 0 00:38:41.630 [2024-06-10 11:47:10.512680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:38:41.630 [2024-06-10 11:47:10.512692] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:38:41.630 [2024-06-10 11:47:10.512696] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:38:41.630 [2024-06-10 11:47:10.512699] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:38:41.630 [2024-06-10 11:47:10.512736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.512741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.512746] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.630 [2024-06-10 11:47:10.512758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:41.630 [2024-06-10 11:47:10.512773] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.630 [2024-06-10 11:47:10.520680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.630 [2024-06-10 11:47:10.520691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.630 [2024-06-10 11:47:10.520694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.520699] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.630 [2024-06-10 11:47:10.520709] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:38:41.630 [2024-06-10 11:47:10.520715] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:38:41.630 [2024-06-10 11:47:10.520720] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:38:41.630 [2024-06-10 11:47:10.520737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.520741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.520745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.630 [2024-06-10 11:47:10.520753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.630 [2024-06-10 11:47:10.520765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.630 [2024-06-10 11:47:10.520999] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.630 [2024-06-10 11:47:10.521006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.630 [2024-06-10 11:47:10.521009] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.521013] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.630 [2024-06-10 11:47:10.521019] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:38:41.630 [2024-06-10 11:47:10.521027] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:38:41.630 [2024-06-10 11:47:10.521033] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.521037] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.630 [2024-06-10 11:47:10.521040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.630 [2024-06-10 11:47:10.521047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.630 [2024-06-10 11:47:10.521057] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.630 [2024-06-10 11:47:10.521278] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.630 [2024-06-10 11:47:10.521284] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.631 [2024-06-10 11:47:10.521288] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.631 [2024-06-10 11:47:10.521298] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:38:41.631 [2024-06-10 11:47:10.521305] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:38:41.631 [2024-06-10 11:47:10.521312] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521316] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521320] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.521326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.631 [2024-06-10 11:47:10.521336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.631 [2024-06-10 11:47:10.521551] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.631 [2024-06-10 
11:47:10.521557] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.631 [2024-06-10 11:47:10.521560] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521564] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.631 [2024-06-10 11:47:10.521570] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:38:41.631 [2024-06-10 11:47:10.521579] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521586] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.521595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.631 [2024-06-10 11:47:10.521605] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.631 [2024-06-10 11:47:10.521803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.631 [2024-06-10 11:47:10.521809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.631 [2024-06-10 11:47:10.521812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521816] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.631 [2024-06-10 11:47:10.521821] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:38:41.631 [2024-06-10 11:47:10.521826] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:38:41.631 [2024-06-10 11:47:10.521833] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:38:41.631 [2024-06-10 11:47:10.521938] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:38:41.631 [2024-06-10 11:47:10.521943] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:38:41.631 [2024-06-10 11:47:10.521951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521955] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.521958] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.521965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.631 [2024-06-10 11:47:10.521974] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.631 [2024-06-10 11:47:10.522207] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.631 [2024-06-10 11:47:10.522213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.631 [2024-06-10 11:47:10.522217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.631 [2024-06-10 11:47:10.522226] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:38:41.631 [2024-06-10 11:47:10.522234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.522248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.631 [2024-06-10 11:47:10.522257] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.631 [2024-06-10 11:47:10.522476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.631 [2024-06-10 11:47:10.522482] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.631 [2024-06-10 11:47:10.522486] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522489] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.631 [2024-06-10 11:47:10.522494] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:38:41.631 [2024-06-10 11:47:10.522499] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:38:41.631 [2024-06-10 11:47:10.522508] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:38:41.631 [2024-06-10 11:47:10.522516] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:38:41.631 [2024-06-10 11:47:10.522524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522528] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.522535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.631 [2024-06-10 11:47:10.522544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.631 [2024-06-10 11:47:10.522815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.631 [2024-06-10 11:47:10.522822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.631 [2024-06-10 11:47:10.522825] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522829] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1353ec0): datao=0, datal=4096, cccid=0 00:38:41.631 [2024-06-10 11:47:10.522834] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d6df0) on tqpair(0x1353ec0): expected_datao=0, payload_size=4096 00:38:41.631 [2024-06-10 11:47:10.522838] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522846] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522850] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.522999] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.631 [2024-06-10 11:47:10.523006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.631 [2024-06-10 11:47:10.523009] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523013] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.631 [2024-06-10 11:47:10.523021] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:38:41.631 [2024-06-10 11:47:10.523026] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:38:41.631 [2024-06-10 11:47:10.523030] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:38:41.631 [2024-06-10 11:47:10.523035] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:38:41.631 [2024-06-10 11:47:10.523040] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:38:41.631 [2024-06-10 11:47:10.523045] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:38:41.631 [2024-06-10 11:47:10.523053] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:38:41.631 [2024-06-10 11:47:10.523063] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523067] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523070] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.523077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:41.631 [2024-06-10 11:47:10.523088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.631 [2024-06-10 11:47:10.523313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.631 [2024-06-10 11:47:10.523319] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.631 [2024-06-10 11:47:10.523324] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523328] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d6df0) on tqpair=0x1353ec0 00:38:41.631 [2024-06-10 11:47:10.523338] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523342] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.523352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:38:41.631 [2024-06-10 11:47:10.523358] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523361] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.523370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.631 [2024-06-10 11:47:10.523377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1353ec0) 00:38:41.631 [2024-06-10 11:47:10.523389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.631 [2024-06-10 11:47:10.523395] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523399] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.631 [2024-06-10 11:47:10.523402] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.632 [2024-06-10 11:47:10.523408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.632 [2024-06-10 11:47:10.523412] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:38:41.632 [2024-06-10 11:47:10.523420] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:38:41.632 [2024-06-10 11:47:10.523426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.523430] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1353ec0) 00:38:41.632 [2024-06-10 11:47:10.523437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.632 [2024-06-10 11:47:10.523448] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6df0, cid 0, qid 0 00:38:41.632 [2024-06-10 11:47:10.523453] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d6f50, cid 1, qid 0 00:38:41.632 [2024-06-10 11:47:10.523457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d70b0, cid 2, qid 0 00:38:41.632 [2024-06-10 11:47:10.523462] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.632 [2024-06-10 11:47:10.523467] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7370, cid 4, qid 0 00:38:41.632 [2024-06-10 11:47:10.523697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.632 [2024-06-10 11:47:10.523704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.632 [2024-06-10 11:47:10.523707] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.523711] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7370) on tqpair=0x1353ec0 
00:38:41.632 [2024-06-10 11:47:10.523719] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:38:41.632 [2024-06-10 11:47:10.523726] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:38:41.632 [2024-06-10 11:47:10.523736] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.523740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1353ec0) 00:38:41.632 [2024-06-10 11:47:10.523746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.632 [2024-06-10 11:47:10.523756] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7370, cid 4, qid 0 00:38:41.632 [2024-06-10 11:47:10.523986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.632 [2024-06-10 11:47:10.523992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.632 [2024-06-10 11:47:10.523995] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.523999] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1353ec0): datao=0, datal=4096, cccid=4 00:38:41.632 [2024-06-10 11:47:10.524003] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d7370) on tqpair(0x1353ec0): expected_datao=0, payload_size=4096 00:38:41.632 [2024-06-10 11:47:10.524007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.524034] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.524038] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.568679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.632 [2024-06-10 11:47:10.568691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.632 [2024-06-10 11:47:10.568694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.568698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7370) on tqpair=0x1353ec0 00:38:41.632 [2024-06-10 11:47:10.568712] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:38:41.632 [2024-06-10 11:47:10.568734] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.568738] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1353ec0) 00:38:41.632 [2024-06-10 11:47:10.568746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.632 [2024-06-10 11:47:10.568753] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.568757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.568760] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1353ec0) 00:38:41.632 [2024-06-10 11:47:10.568766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.632 [2024-06-10 11:47:10.568785] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7370, cid 4, qid 0 00:38:41.632 [2024-06-10 11:47:10.568790] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d74d0, cid 5, qid 0 00:38:41.632 [2024-06-10 11:47:10.569021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.632 [2024-06-10 11:47:10.569028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.632 [2024-06-10 11:47:10.569032] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.569035] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1353ec0): datao=0, datal=1024, cccid=4 00:38:41.632 [2024-06-10 11:47:10.569039] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d7370) on tqpair(0x1353ec0): expected_datao=0, payload_size=1024 00:38:41.632 [2024-06-10 11:47:10.569044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.569050] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.569054] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.569062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.632 [2024-06-10 11:47:10.569068] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.632 [2024-06-10 11:47:10.569071] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.632 [2024-06-10 11:47:10.569075] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d74d0) on tqpair=0x1353ec0 00:38:41.898 [2024-06-10 11:47:10.610881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.898 [2024-06-10 11:47:10.610893] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.898 [2024-06-10 11:47:10.610897] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.610901] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7370) on tqpair=0x1353ec0 00:38:41.898 [2024-06-10 11:47:10.610916] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.610920] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1353ec0) 00:38:41.898 [2024-06-10 11:47:10.610928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.898 [2024-06-10 11:47:10.610944] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7370, cid 4, qid 0 00:38:41.898 [2024-06-10 11:47:10.611174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.898 [2024-06-10 11:47:10.611181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.898 [2024-06-10 11:47:10.611184] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611188] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1353ec0): datao=0, datal=3072, cccid=4 00:38:41.898 [2024-06-10 11:47:10.611192] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d7370) on tqpair(0x1353ec0): expected_datao=0, payload_size=3072 00:38:41.898 [2024-06-10 11:47:10.611196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611203] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:38:41.898 [2024-06-10 11:47:10.611207] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.898 [2024-06-10 11:47:10.611329] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.898 [2024-06-10 11:47:10.611332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7370) on tqpair=0x1353ec0 00:38:41.898 [2024-06-10 11:47:10.611345] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1353ec0) 00:38:41.898 [2024-06-10 11:47:10.611355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.898 [2024-06-10 11:47:10.611368] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7370, cid 4, qid 0 00:38:41.898 [2024-06-10 11:47:10.611620] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.898 [2024-06-10 11:47:10.611627] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.898 [2024-06-10 11:47:10.611631] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611634] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1353ec0): datao=0, datal=8, cccid=4 00:38:41.898 [2024-06-10 11:47:10.611639] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13d7370) on tqpair(0x1353ec0): expected_datao=0, payload_size=8 00:38:41.898 [2024-06-10 11:47:10.611643] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611649] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.611653] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.655680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.898 [2024-06-10 11:47:10.655693] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.898 [2024-06-10 11:47:10.655697] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.898 [2024-06-10 11:47:10.655701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7370) on tqpair=0x1353ec0 00:38:41.898 ===================================================== 00:38:41.898 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:38:41.898 ===================================================== 00:38:41.898 Controller Capabilities/Features 00:38:41.898 ================================ 00:38:41.898 Vendor ID: 0000 00:38:41.898 Subsystem Vendor ID: 0000 00:38:41.898 Serial Number: .................... 00:38:41.898 Model Number: ........................................ 
00:38:41.898 Firmware Version: 24.09 00:38:41.898 Recommended Arb Burst: 0 00:38:41.898 IEEE OUI Identifier: 00 00 00 00:38:41.898 Multi-path I/O 00:38:41.898 May have multiple subsystem ports: No 00:38:41.898 May have multiple controllers: No 00:38:41.898 Associated with SR-IOV VF: No 00:38:41.898 Max Data Transfer Size: 131072 00:38:41.898 Max Number of Namespaces: 0 00:38:41.898 Max Number of I/O Queues: 1024 00:38:41.898 NVMe Specification Version (VS): 1.3 00:38:41.898 NVMe Specification Version (Identify): 1.3 00:38:41.898 Maximum Queue Entries: 128 00:38:41.898 Contiguous Queues Required: Yes 00:38:41.898 Arbitration Mechanisms Supported 00:38:41.898 Weighted Round Robin: Not Supported 00:38:41.898 Vendor Specific: Not Supported 00:38:41.898 Reset Timeout: 15000 ms 00:38:41.898 Doorbell Stride: 4 bytes 00:38:41.898 NVM Subsystem Reset: Not Supported 00:38:41.898 Command Sets Supported 00:38:41.898 NVM Command Set: Supported 00:38:41.898 Boot Partition: Not Supported 00:38:41.898 Memory Page Size Minimum: 4096 bytes 00:38:41.898 Memory Page Size Maximum: 4096 bytes 00:38:41.898 Persistent Memory Region: Not Supported 00:38:41.898 Optional Asynchronous Events Supported 00:38:41.898 Namespace Attribute Notices: Not Supported 00:38:41.898 Firmware Activation Notices: Not Supported 00:38:41.898 ANA Change Notices: Not Supported 00:38:41.898 PLE Aggregate Log Change Notices: Not Supported 00:38:41.898 LBA Status Info Alert Notices: Not Supported 00:38:41.898 EGE Aggregate Log Change Notices: Not Supported 00:38:41.898 Normal NVM Subsystem Shutdown event: Not Supported 00:38:41.898 Zone Descriptor Change Notices: Not Supported 00:38:41.898 Discovery Log Change Notices: Supported 00:38:41.898 Controller Attributes 00:38:41.898 128-bit Host Identifier: Not Supported 00:38:41.898 Non-Operational Permissive Mode: Not Supported 00:38:41.898 NVM Sets: Not Supported 00:38:41.898 Read Recovery Levels: Not Supported 00:38:41.898 Endurance Groups: Not Supported 00:38:41.898 Predictable Latency Mode: Not Supported 00:38:41.898 Traffic Based Keep ALive: Not Supported 00:38:41.898 Namespace Granularity: Not Supported 00:38:41.898 SQ Associations: Not Supported 00:38:41.898 UUID List: Not Supported 00:38:41.898 Multi-Domain Subsystem: Not Supported 00:38:41.898 Fixed Capacity Management: Not Supported 00:38:41.898 Variable Capacity Management: Not Supported 00:38:41.898 Delete Endurance Group: Not Supported 00:38:41.898 Delete NVM Set: Not Supported 00:38:41.898 Extended LBA Formats Supported: Not Supported 00:38:41.898 Flexible Data Placement Supported: Not Supported 00:38:41.898 00:38:41.898 Controller Memory Buffer Support 00:38:41.898 ================================ 00:38:41.898 Supported: No 00:38:41.898 00:38:41.898 Persistent Memory Region Support 00:38:41.898 ================================ 00:38:41.898 Supported: No 00:38:41.898 00:38:41.898 Admin Command Set Attributes 00:38:41.898 ============================ 00:38:41.898 Security Send/Receive: Not Supported 00:38:41.898 Format NVM: Not Supported 00:38:41.898 Firmware Activate/Download: Not Supported 00:38:41.898 Namespace Management: Not Supported 00:38:41.898 Device Self-Test: Not Supported 00:38:41.898 Directives: Not Supported 00:38:41.898 NVMe-MI: Not Supported 00:38:41.898 Virtualization Management: Not Supported 00:38:41.898 Doorbell Buffer Config: Not Supported 00:38:41.898 Get LBA Status Capability: Not Supported 00:38:41.898 Command & Feature Lockdown Capability: Not Supported 00:38:41.898 Abort Command Limit: 1 00:38:41.898 Async 
Event Request Limit: 4 00:38:41.898 Number of Firmware Slots: N/A 00:38:41.898 Firmware Slot 1 Read-Only: N/A 00:38:41.898 Firmware Activation Without Reset: N/A 00:38:41.898 Multiple Update Detection Support: N/A 00:38:41.898 Firmware Update Granularity: No Information Provided 00:38:41.898 Per-Namespace SMART Log: No 00:38:41.898 Asymmetric Namespace Access Log Page: Not Supported 00:38:41.898 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:38:41.898 Command Effects Log Page: Not Supported 00:38:41.898 Get Log Page Extended Data: Supported 00:38:41.898 Telemetry Log Pages: Not Supported 00:38:41.898 Persistent Event Log Pages: Not Supported 00:38:41.899 Supported Log Pages Log Page: May Support 00:38:41.899 Commands Supported & Effects Log Page: Not Supported 00:38:41.899 Feature Identifiers & Effects Log Page:May Support 00:38:41.899 NVMe-MI Commands & Effects Log Page: May Support 00:38:41.899 Data Area 4 for Telemetry Log: Not Supported 00:38:41.899 Error Log Page Entries Supported: 128 00:38:41.899 Keep Alive: Not Supported 00:38:41.899 00:38:41.899 NVM Command Set Attributes 00:38:41.899 ========================== 00:38:41.899 Submission Queue Entry Size 00:38:41.899 Max: 1 00:38:41.899 Min: 1 00:38:41.899 Completion Queue Entry Size 00:38:41.899 Max: 1 00:38:41.899 Min: 1 00:38:41.899 Number of Namespaces: 0 00:38:41.899 Compare Command: Not Supported 00:38:41.899 Write Uncorrectable Command: Not Supported 00:38:41.899 Dataset Management Command: Not Supported 00:38:41.899 Write Zeroes Command: Not Supported 00:38:41.899 Set Features Save Field: Not Supported 00:38:41.899 Reservations: Not Supported 00:38:41.899 Timestamp: Not Supported 00:38:41.899 Copy: Not Supported 00:38:41.899 Volatile Write Cache: Not Present 00:38:41.899 Atomic Write Unit (Normal): 1 00:38:41.899 Atomic Write Unit (PFail): 1 00:38:41.899 Atomic Compare & Write Unit: 1 00:38:41.899 Fused Compare & Write: Supported 00:38:41.899 Scatter-Gather List 00:38:41.899 SGL Command Set: Supported 00:38:41.899 SGL Keyed: Supported 00:38:41.899 SGL Bit Bucket Descriptor: Not Supported 00:38:41.899 SGL Metadata Pointer: Not Supported 00:38:41.899 Oversized SGL: Not Supported 00:38:41.899 SGL Metadata Address: Not Supported 00:38:41.899 SGL Offset: Supported 00:38:41.899 Transport SGL Data Block: Not Supported 00:38:41.899 Replay Protected Memory Block: Not Supported 00:38:41.899 00:38:41.899 Firmware Slot Information 00:38:41.899 ========================= 00:38:41.899 Active slot: 0 00:38:41.899 00:38:41.899 00:38:41.899 Error Log 00:38:41.899 ========= 00:38:41.899 00:38:41.899 Active Namespaces 00:38:41.899 ================= 00:38:41.899 Discovery Log Page 00:38:41.899 ================== 00:38:41.899 Generation Counter: 2 00:38:41.899 Number of Records: 2 00:38:41.899 Record Format: 0 00:38:41.899 00:38:41.899 Discovery Log Entry 0 00:38:41.899 ---------------------- 00:38:41.899 Transport Type: 3 (TCP) 00:38:41.899 Address Family: 1 (IPv4) 00:38:41.899 Subsystem Type: 3 (Current Discovery Subsystem) 00:38:41.899 Entry Flags: 00:38:41.899 Duplicate Returned Information: 1 00:38:41.899 Explicit Persistent Connection Support for Discovery: 1 00:38:41.899 Transport Requirements: 00:38:41.899 Secure Channel: Not Required 00:38:41.899 Port ID: 0 (0x0000) 00:38:41.899 Controller ID: 65535 (0xffff) 00:38:41.899 Admin Max SQ Size: 128 00:38:41.899 Transport Service Identifier: 4420 00:38:41.899 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:38:41.899 Transport Address: 10.0.0.2 00:38:41.899 
Discovery Log Entry 1 00:38:41.899 ---------------------- 00:38:41.899 Transport Type: 3 (TCP) 00:38:41.899 Address Family: 1 (IPv4) 00:38:41.899 Subsystem Type: 2 (NVM Subsystem) 00:38:41.899 Entry Flags: 00:38:41.899 Duplicate Returned Information: 0 00:38:41.899 Explicit Persistent Connection Support for Discovery: 0 00:38:41.899 Transport Requirements: 00:38:41.899 Secure Channel: Not Required 00:38:41.899 Port ID: 0 (0x0000) 00:38:41.899 Controller ID: 65535 (0xffff) 00:38:41.899 Admin Max SQ Size: 128 00:38:41.899 Transport Service Identifier: 4420 00:38:41.899 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:38:41.899 Transport Address: 10.0.0.2 [2024-06-10 11:47:10.655788] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:38:41.899 [2024-06-10 11:47:10.655802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.899 [2024-06-10 11:47:10.655809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.899 [2024-06-10 11:47:10.655815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.899 [2024-06-10 11:47:10.655821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.899 [2024-06-10 11:47:10.655830] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.655834] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.655838] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.899 [2024-06-10 11:47:10.655845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.899 [2024-06-10 11:47:10.655858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.899 [2024-06-10 11:47:10.656144] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.899 [2024-06-10 11:47:10.656150] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.899 [2024-06-10 11:47:10.656154] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656157] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.899 [2024-06-10 11:47:10.656165] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.899 [2024-06-10 11:47:10.656179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.899 [2024-06-10 11:47:10.656192] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.899 [2024-06-10 11:47:10.656424] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.899 [2024-06-10 11:47:10.656430] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.899 [2024-06-10 11:47:10.656434] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656437] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.899 [2024-06-10 11:47:10.656443] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:38:41.899 [2024-06-10 11:47:10.656450] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:38:41.899 [2024-06-10 11:47:10.656460] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656467] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.899 [2024-06-10 11:47:10.656474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.899 [2024-06-10 11:47:10.656483] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.899 [2024-06-10 11:47:10.656743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.899 [2024-06-10 11:47:10.656752] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.899 [2024-06-10 11:47:10.656755] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656759] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.899 [2024-06-10 11:47:10.656770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.656777] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.899 [2024-06-10 11:47:10.656784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.899 [2024-06-10 11:47:10.656794] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.899 [2024-06-10 11:47:10.657046] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.899 [2024-06-10 11:47:10.657052] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.899 [2024-06-10 11:47:10.657056] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.657059] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.899 [2024-06-10 11:47:10.657069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.657073] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.899 [2024-06-10 11:47:10.657077] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.657083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.657093] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.657298] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 
11:47:10.657304] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.657307] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657310] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.657321] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657325] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657328] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.657335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.657344] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.657565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.657571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.657575] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657579] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.657589] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657593] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.657604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.657613] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.657801] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.657808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.657814] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657817] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.657828] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.657835] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.657842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.657852] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.658053] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.658059] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.658063] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:38:41.900 [2024-06-10 11:47:10.658067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.658077] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658081] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.658091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.658100] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.658358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.658364] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.658368] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658371] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.658381] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658385] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658389] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.658395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.658405] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.658621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.658628] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.658631] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658635] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.658645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658652] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.658659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.658673] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.658912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.658918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.658922] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658930] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.658940] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658944] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.658948] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.658954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.658964] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.659212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.659219] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.659222] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.659226] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.659236] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.659240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.659244] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.659250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.659260] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.659464] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.659471] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.659474] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.659478] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.659488] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.659492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.659496] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.659502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.659512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.663681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.663690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.663693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.663697] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.663707] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.663711] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.900 [2024-06-10 
11:47:10.663714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1353ec0) 00:38:41.900 [2024-06-10 11:47:10.663721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.900 [2024-06-10 11:47:10.663732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13d7210, cid 3, qid 0 00:38:41.900 [2024-06-10 11:47:10.663927] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.900 [2024-06-10 11:47:10.663933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.900 [2024-06-10 11:47:10.663937] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.900 [2024-06-10 11:47:10.663941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13d7210) on tqpair=0x1353ec0 00:38:41.900 [2024-06-10 11:47:10.663951] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:38:41.900 00:38:41.900 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:38:41.900 [2024-06-10 11:47:10.701533] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:38:41.900 [2024-06-10 11:47:10.701578] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427389 ] 00:38:41.900 EAL: No free 2048 kB hugepages reported on node 1 00:38:41.900 [2024-06-10 11:47:10.734220] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:38:41.900 [2024-06-10 11:47:10.734261] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:38:41.900 [2024-06-10 11:47:10.734266] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:38:41.900 [2024-06-10 11:47:10.734278] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:38:41.901 [2024-06-10 11:47:10.734285] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:38:41.901 [2024-06-10 11:47:10.737704] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:38:41.901 [2024-06-10 11:47:10.737728] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fa1ec0 0 00:38:41.901 [2024-06-10 11:47:10.745679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:38:41.901 [2024-06-10 11:47:10.745688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:38:41.901 [2024-06-10 11:47:10.745693] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:38:41.901 [2024-06-10 11:47:10.745696] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:38:41.901 [2024-06-10 11:47:10.745726] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.745732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.745736] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 
00:38:41.901 [2024-06-10 11:47:10.745748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:41.901 [2024-06-10 11:47:10.745763] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.753679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.753688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.901 [2024-06-10 11:47:10.753691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.753696] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.901 [2024-06-10 11:47:10.753708] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:38:41.901 [2024-06-10 11:47:10.753715] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:38:41.901 [2024-06-10 11:47:10.753720] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:38:41.901 [2024-06-10 11:47:10.753732] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.753736] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.753739] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.901 [2024-06-10 11:47:10.753751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.901 [2024-06-10 11:47:10.753763] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.753948] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.753955] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.901 [2024-06-10 11:47:10.753959] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.753962] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.901 [2024-06-10 11:47:10.753968] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:38:41.901 [2024-06-10 11:47:10.753976] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:38:41.901 [2024-06-10 11:47:10.753982] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.753986] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.753989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.901 [2024-06-10 11:47:10.753996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.901 [2024-06-10 11:47:10.754006] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.754208] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.754215] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.901 
[2024-06-10 11:47:10.754218] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.901 [2024-06-10 11:47:10.754228] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:38:41.901 [2024-06-10 11:47:10.754235] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:38:41.901 [2024-06-10 11:47:10.754241] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754245] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.901 [2024-06-10 11:47:10.754255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.901 [2024-06-10 11:47:10.754265] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.754471] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.754477] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.901 [2024-06-10 11:47:10.754480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754484] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.901 [2024-06-10 11:47:10.754489] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:38:41.901 [2024-06-10 11:47:10.754498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754505] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.901 [2024-06-10 11:47:10.754512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.901 [2024-06-10 11:47:10.754522] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.754712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.754719] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.901 [2024-06-10 11:47:10.754722] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.901 [2024-06-10 11:47:10.754731] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:38:41.901 [2024-06-10 11:47:10.754735] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:38:41.901 [2024-06-10 11:47:10.754742] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 
00:38:41.901 [2024-06-10 11:47:10.754848] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:38:41.901 [2024-06-10 11:47:10.754852] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:38:41.901 [2024-06-10 11:47:10.754859] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.754866] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.901 [2024-06-10 11:47:10.754873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.901 [2024-06-10 11:47:10.754883] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.755074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.755080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.901 [2024-06-10 11:47:10.755083] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755087] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.901 [2024-06-10 11:47:10.755092] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:38:41.901 [2024-06-10 11:47:10.755101] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.901 [2024-06-10 11:47:10.755115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.901 [2024-06-10 11:47:10.755124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.755303] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.755309] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.901 [2024-06-10 11:47:10.755313] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.901 [2024-06-10 11:47:10.755321] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:38:41.901 [2024-06-10 11:47:10.755326] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:38:41.901 [2024-06-10 11:47:10.755333] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:38:41.901 [2024-06-10 11:47:10.755341] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:38:41.901 [2024-06-10 11:47:10.755350] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755354] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.901 [2024-06-10 11:47:10.755361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.901 [2024-06-10 11:47:10.755371] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.901 [2024-06-10 11:47:10.755599] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.901 [2024-06-10 11:47:10.755605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.901 [2024-06-10 11:47:10.755609] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755613] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=4096, cccid=0 00:38:41.901 [2024-06-10 11:47:10.755617] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2024df0) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=4096 00:38:41.901 [2024-06-10 11:47:10.755621] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755701] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.755705] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.901 [2024-06-10 11:47:10.796821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.901 [2024-06-10 11:47:10.796831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.902 [2024-06-10 11:47:10.796834] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.796838] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.902 [2024-06-10 11:47:10.796846] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:38:41.902 [2024-06-10 11:47:10.796851] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:38:41.902 [2024-06-10 11:47:10.796855] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:38:41.902 [2024-06-10 11:47:10.796859] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:38:41.902 [2024-06-10 11:47:10.796863] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:38:41.902 [2024-06-10 11:47:10.796868] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.796876] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.796886] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.796890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.796893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.796901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:41.902 [2024-06-10 11:47:10.796912] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.902 [2024-06-10 11:47:10.797084] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.902 [2024-06-10 11:47:10.797090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.902 [2024-06-10 11:47:10.797094] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797097] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2024df0) on tqpair=0x1fa1ec0 00:38:41.902 [2024-06-10 11:47:10.797107] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797114] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.797122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.902 [2024-06-10 11:47:10.797128] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797132] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.797141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.902 [2024-06-10 11:47:10.797147] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797154] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.797160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.902 [2024-06-10 11:47:10.797166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797173] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.797178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.902 [2024-06-10 11:47:10.797183] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.797190] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.797197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797200] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.797207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:41.902 [2024-06-10 11:47:10.797218] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024df0, cid 0, qid 0 00:38:41.902 [2024-06-10 11:47:10.797223] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2024f50, cid 1, qid 0 00:38:41.902 [2024-06-10 11:47:10.797228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20250b0, cid 2, qid 0 00:38:41.902 [2024-06-10 11:47:10.797232] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025210, cid 3, qid 0 00:38:41.902 [2024-06-10 11:47:10.797237] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025370, cid 4, qid 0 00:38:41.902 [2024-06-10 11:47:10.797429] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.902 [2024-06-10 11:47:10.797435] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.902 [2024-06-10 11:47:10.797438] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797442] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025370) on tqpair=0x1fa1ec0 00:38:41.902 [2024-06-10 11:47:10.797450] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:38:41.902 [2024-06-10 11:47:10.797455] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.797462] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.797468] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.797476] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797480] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.797483] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.797490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:41.902 [2024-06-10 11:47:10.797500] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025370, cid 4, qid 0 00:38:41.902 [2024-06-10 11:47:10.801677] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.902 [2024-06-10 11:47:10.801684] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.902 [2024-06-10 11:47:10.801688] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.801691] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025370) on tqpair=0x1fa1ec0 00:38:41.902 [2024-06-10 11:47:10.801747] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.801756] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:38:41.902 [2024-06-10 11:47:10.801763] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.801767] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1ec0) 00:38:41.902 [2024-06-10 11:47:10.801773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.902 [2024-06-10 11:47:10.801784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025370, cid 4, qid 0 00:38:41.902 [2024-06-10 11:47:10.801975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.902 [2024-06-10 11:47:10.801981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.902 [2024-06-10 11:47:10.801985] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.801989] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=4096, cccid=4 00:38:41.902 [2024-06-10 11:47:10.801993] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2025370) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=4096 00:38:41.902 [2024-06-10 11:47:10.801997] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.802004] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.802007] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.802213] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.902 [2024-06-10 11:47:10.802219] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.902 [2024-06-10 11:47:10.802223] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.902 [2024-06-10 11:47:10.802226] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025370) on tqpair=0x1fa1ec0 00:38:41.902 [2024-06-10 11:47:10.802235] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:38:41.903 [2024-06-10 11:47:10.802248] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.802257] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.802263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.802273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.802283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025370, cid 4, qid 0 00:38:41.903 [2024-06-10 11:47:10.802458] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.903 [2024-06-10 11:47:10.802464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.903 [2024-06-10 11:47:10.802468] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802471] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=4096, cccid=4 00:38:41.903 [2024-06-10 11:47:10.802476] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2025370) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=4096 
00:38:41.903 [2024-06-10 11:47:10.802480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802486] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802490] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.903 [2024-06-10 11:47:10.802636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.903 [2024-06-10 11:47:10.802640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802643] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025370) on tqpair=0x1fa1ec0 00:38:41.903 [2024-06-10 11:47:10.802655] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.802664] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.802675] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802679] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.802686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.802696] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025370, cid 4, qid 0 00:38:41.903 [2024-06-10 11:47:10.802878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.903 [2024-06-10 11:47:10.802884] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.903 [2024-06-10 11:47:10.802888] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802891] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=4096, cccid=4 00:38:41.903 [2024-06-10 11:47:10.802896] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2025370) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=4096 00:38:41.903 [2024-06-10 11:47:10.802900] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802906] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.802910] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.903 [2024-06-10 11:47:10.803127] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.903 [2024-06-10 11:47:10.803131] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025370) on tqpair=0x1fa1ec0 00:38:41.903 [2024-06-10 11:47:10.803142] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.803150] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 
00:38:41.903 [2024-06-10 11:47:10.803157] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.803163] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.803170] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.803175] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:38:41.903 [2024-06-10 11:47:10.803179] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:38:41.903 [2024-06-10 11:47:10.803184] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:38:41.903 [2024-06-10 11:47:10.803198] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803201] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.803208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.803214] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803219] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803222] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.803228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.903 [2024-06-10 11:47:10.803241] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025370, cid 4, qid 0 00:38:41.903 [2024-06-10 11:47:10.803246] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20254d0, cid 5, qid 0 00:38:41.903 [2024-06-10 11:47:10.803454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.903 [2024-06-10 11:47:10.803460] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.903 [2024-06-10 11:47:10.803463] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025370) on tqpair=0x1fa1ec0 00:38:41.903 [2024-06-10 11:47:10.803474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.903 [2024-06-10 11:47:10.803480] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.903 [2024-06-10 11:47:10.803483] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803487] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20254d0) on tqpair=0x1fa1ec0 00:38:41.903 [2024-06-10 11:47:10.803496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.803506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER 
MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.803516] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20254d0, cid 5, qid 0 00:38:41.903 [2024-06-10 11:47:10.803681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.903 [2024-06-10 11:47:10.803687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.903 [2024-06-10 11:47:10.803691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803694] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20254d0) on tqpair=0x1fa1ec0 00:38:41.903 [2024-06-10 11:47:10.803704] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.803714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.803723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20254d0, cid 5, qid 0 00:38:41.903 [2024-06-10 11:47:10.803953] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.903 [2024-06-10 11:47:10.803961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.903 [2024-06-10 11:47:10.803964] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803968] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20254d0) on tqpair=0x1fa1ec0 00:38:41.903 [2024-06-10 11:47:10.803978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.803981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.803988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.803997] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20254d0, cid 5, qid 0 00:38:41.903 [2024-06-10 11:47:10.804190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.903 [2024-06-10 11:47:10.804196] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.903 [2024-06-10 11:47:10.804199] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.804203] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20254d0) on tqpair=0x1fa1ec0 00:38:41.903 [2024-06-10 11:47:10.804217] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.804221] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.804227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.804235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.804238] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.804244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.804251] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.804255] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.804261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.804268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.903 [2024-06-10 11:47:10.804272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fa1ec0) 00:38:41.903 [2024-06-10 11:47:10.804278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.903 [2024-06-10 11:47:10.804288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20254d0, cid 5, qid 0 00:38:41.903 [2024-06-10 11:47:10.804294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025370, cid 4, qid 0 00:38:41.903 [2024-06-10 11:47:10.804298] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025630, cid 6, qid 0 00:38:41.904 [2024-06-10 11:47:10.804303] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025790, cid 7, qid 0 00:38:41.904 [2024-06-10 11:47:10.804527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.904 [2024-06-10 11:47:10.804534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.904 [2024-06-10 11:47:10.804537] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804540] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=8192, cccid=5 00:38:41.904 [2024-06-10 11:47:10.804545] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20254d0) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=8192 00:38:41.904 [2024-06-10 11:47:10.804549] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804639] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804644] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.904 [2024-06-10 11:47:10.804655] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.904 [2024-06-10 11:47:10.804658] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804662] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=512, cccid=4 00:38:41.904 [2024-06-10 11:47:10.804666] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2025370) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=512 00:38:41.904 [2024-06-10 11:47:10.804674] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804681] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804684] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804690] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.904 [2024-06-10 11:47:10.804695] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.904 [2024-06-10 11:47:10.804699] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804702] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=512, cccid=6 00:38:41.904 [2024-06-10 11:47:10.804706] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2025630) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=512 00:38:41.904 [2024-06-10 11:47:10.804710] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804717] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804720] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804726] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:38:41.904 [2024-06-10 11:47:10.804731] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:38:41.904 [2024-06-10 11:47:10.804734] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804738] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fa1ec0): datao=0, datal=4096, cccid=7 00:38:41.904 [2024-06-10 11:47:10.804742] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2025790) on tqpair(0x1fa1ec0): expected_datao=0, payload_size=4096 00:38:41.904 [2024-06-10 11:47:10.804746] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804758] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804762] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804809] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.904 [2024-06-10 11:47:10.804815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.904 [2024-06-10 11:47:10.804818] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20254d0) on tqpair=0x1fa1ec0 00:38:41.904 [2024-06-10 11:47:10.804834] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.904 [2024-06-10 11:47:10.804840] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.904 [2024-06-10 11:47:10.804843] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804847] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025370) on tqpair=0x1fa1ec0 00:38:41.904 [2024-06-10 11:47:10.804857] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.904 [2024-06-10 11:47:10.804863] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.904 [2024-06-10 11:47:10.804866] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804870] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025630) on tqpair=0x1fa1ec0 00:38:41.904 [2024-06-10 11:47:10.804878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.904 [2024-06-10 11:47:10.804885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.904 [2024-06-10 11:47:10.804888] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.904 [2024-06-10 11:47:10.804892] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025790) on tqpair=0x1fa1ec0 00:38:41.904 ===================================================== 00:38:41.904 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:41.904 ===================================================== 00:38:41.904 Controller Capabilities/Features 00:38:41.904 ================================ 00:38:41.904 Vendor ID: 8086 00:38:41.904 Subsystem Vendor ID: 8086 00:38:41.904 Serial Number: SPDK00000000000001 00:38:41.904 Model Number: SPDK bdev Controller 00:38:41.904 Firmware Version: 24.09 00:38:41.904 Recommended Arb Burst: 6 00:38:41.904 IEEE OUI Identifier: e4 d2 5c 00:38:41.904 Multi-path I/O 00:38:41.904 May have multiple subsystem ports: Yes 00:38:41.904 May have multiple controllers: Yes 00:38:41.904 Associated with SR-IOV VF: No 00:38:41.904 Max Data Transfer Size: 131072 00:38:41.904 Max Number of Namespaces: 32 00:38:41.904 Max Number of I/O Queues: 127 00:38:41.904 NVMe Specification Version (VS): 1.3 00:38:41.904 NVMe Specification Version (Identify): 1.3 00:38:41.904 Maximum Queue Entries: 128 00:38:41.904 Contiguous Queues Required: Yes 00:38:41.904 Arbitration Mechanisms Supported 00:38:41.904 Weighted Round Robin: Not Supported 00:38:41.904 Vendor Specific: Not Supported 00:38:41.904 Reset Timeout: 15000 ms 00:38:41.904 Doorbell Stride: 4 bytes 00:38:41.904 NVM Subsystem Reset: Not Supported 00:38:41.904 Command Sets Supported 00:38:41.904 NVM Command Set: Supported 00:38:41.904 Boot Partition: Not Supported 00:38:41.904 Memory Page Size Minimum: 4096 bytes 00:38:41.904 Memory Page Size Maximum: 4096 bytes 00:38:41.904 Persistent Memory Region: Not Supported 00:38:41.904 Optional Asynchronous Events Supported 00:38:41.904 Namespace Attribute Notices: Supported 00:38:41.904 Firmware Activation Notices: Not Supported 00:38:41.904 ANA Change Notices: Not Supported 00:38:41.904 PLE Aggregate Log Change Notices: Not Supported 00:38:41.904 LBA Status Info Alert Notices: Not Supported 00:38:41.904 EGE Aggregate Log Change Notices: Not Supported 00:38:41.904 Normal NVM Subsystem Shutdown event: Not Supported 00:38:41.904 Zone Descriptor Change Notices: Not Supported 00:38:41.904 Discovery Log Change Notices: Not Supported 00:38:41.904 Controller Attributes 00:38:41.904 128-bit Host Identifier: Supported 00:38:41.904 Non-Operational Permissive Mode: Not Supported 00:38:41.904 NVM Sets: Not Supported 00:38:41.904 Read Recovery Levels: Not Supported 00:38:41.904 Endurance Groups: Not Supported 00:38:41.904 Predictable Latency Mode: Not Supported 00:38:41.904 Traffic Based Keep ALive: Not Supported 00:38:41.904 Namespace Granularity: Not Supported 00:38:41.904 SQ Associations: Not Supported 00:38:41.904 UUID List: Not Supported 00:38:41.904 Multi-Domain Subsystem: Not Supported 00:38:41.904 Fixed Capacity Management: Not Supported 00:38:41.904 Variable Capacity Management: Not Supported 00:38:41.904 Delete Endurance Group: Not Supported 00:38:41.904 Delete NVM Set: Not Supported 00:38:41.904 Extended LBA Formats Supported: Not Supported 00:38:41.904 Flexible Data Placement Supported: Not Supported 00:38:41.904 00:38:41.904 Controller Memory Buffer Support 00:38:41.904 ================================ 00:38:41.904 Supported: No 00:38:41.904 00:38:41.904 Persistent Memory Region Support 00:38:41.904 ================================ 00:38:41.904 Supported: No 
00:38:41.904 00:38:41.904 Admin Command Set Attributes 00:38:41.904 ============================ 00:38:41.904 Security Send/Receive: Not Supported 00:38:41.904 Format NVM: Not Supported 00:38:41.904 Firmware Activate/Download: Not Supported 00:38:41.904 Namespace Management: Not Supported 00:38:41.904 Device Self-Test: Not Supported 00:38:41.904 Directives: Not Supported 00:38:41.904 NVMe-MI: Not Supported 00:38:41.904 Virtualization Management: Not Supported 00:38:41.904 Doorbell Buffer Config: Not Supported 00:38:41.904 Get LBA Status Capability: Not Supported 00:38:41.904 Command & Feature Lockdown Capability: Not Supported 00:38:41.904 Abort Command Limit: 4 00:38:41.904 Async Event Request Limit: 4 00:38:41.904 Number of Firmware Slots: N/A 00:38:41.904 Firmware Slot 1 Read-Only: N/A 00:38:41.904 Firmware Activation Without Reset: N/A 00:38:41.904 Multiple Update Detection Support: N/A 00:38:41.904 Firmware Update Granularity: No Information Provided 00:38:41.904 Per-Namespace SMART Log: No 00:38:41.904 Asymmetric Namespace Access Log Page: Not Supported 00:38:41.904 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:38:41.904 Command Effects Log Page: Supported 00:38:41.904 Get Log Page Extended Data: Supported 00:38:41.904 Telemetry Log Pages: Not Supported 00:38:41.904 Persistent Event Log Pages: Not Supported 00:38:41.904 Supported Log Pages Log Page: May Support 00:38:41.904 Commands Supported & Effects Log Page: Not Supported 00:38:41.904 Feature Identifiers & Effects Log Page:May Support 00:38:41.904 NVMe-MI Commands & Effects Log Page: May Support 00:38:41.904 Data Area 4 for Telemetry Log: Not Supported 00:38:41.904 Error Log Page Entries Supported: 128 00:38:41.904 Keep Alive: Supported 00:38:41.904 Keep Alive Granularity: 10000 ms 00:38:41.904 00:38:41.904 NVM Command Set Attributes 00:38:41.904 ========================== 00:38:41.905 Submission Queue Entry Size 00:38:41.905 Max: 64 00:38:41.905 Min: 64 00:38:41.905 Completion Queue Entry Size 00:38:41.905 Max: 16 00:38:41.905 Min: 16 00:38:41.905 Number of Namespaces: 32 00:38:41.905 Compare Command: Supported 00:38:41.905 Write Uncorrectable Command: Not Supported 00:38:41.905 Dataset Management Command: Supported 00:38:41.905 Write Zeroes Command: Supported 00:38:41.905 Set Features Save Field: Not Supported 00:38:41.905 Reservations: Supported 00:38:41.905 Timestamp: Not Supported 00:38:41.905 Copy: Supported 00:38:41.905 Volatile Write Cache: Present 00:38:41.905 Atomic Write Unit (Normal): 1 00:38:41.905 Atomic Write Unit (PFail): 1 00:38:41.905 Atomic Compare & Write Unit: 1 00:38:41.905 Fused Compare & Write: Supported 00:38:41.905 Scatter-Gather List 00:38:41.905 SGL Command Set: Supported 00:38:41.905 SGL Keyed: Supported 00:38:41.905 SGL Bit Bucket Descriptor: Not Supported 00:38:41.905 SGL Metadata Pointer: Not Supported 00:38:41.905 Oversized SGL: Not Supported 00:38:41.905 SGL Metadata Address: Not Supported 00:38:41.905 SGL Offset: Supported 00:38:41.905 Transport SGL Data Block: Not Supported 00:38:41.905 Replay Protected Memory Block: Not Supported 00:38:41.905 00:38:41.905 Firmware Slot Information 00:38:41.905 ========================= 00:38:41.905 Active slot: 1 00:38:41.905 Slot 1 Firmware Revision: 24.09 00:38:41.905 00:38:41.905 00:38:41.905 Commands Supported and Effects 00:38:41.905 ============================== 00:38:41.905 Admin Commands 00:38:41.905 -------------- 00:38:41.905 Get Log Page (02h): Supported 00:38:41.905 Identify (06h): Supported 00:38:41.905 Abort (08h): Supported 00:38:41.905 Set 
Features (09h): Supported 00:38:41.905 Get Features (0Ah): Supported 00:38:41.905 Asynchronous Event Request (0Ch): Supported 00:38:41.905 Keep Alive (18h): Supported 00:38:41.905 I/O Commands 00:38:41.905 ------------ 00:38:41.905 Flush (00h): Supported LBA-Change 00:38:41.905 Write (01h): Supported LBA-Change 00:38:41.905 Read (02h): Supported 00:38:41.905 Compare (05h): Supported 00:38:41.905 Write Zeroes (08h): Supported LBA-Change 00:38:41.905 Dataset Management (09h): Supported LBA-Change 00:38:41.905 Copy (19h): Supported LBA-Change 00:38:41.905 Unknown (79h): Supported LBA-Change 00:38:41.905 Unknown (7Ah): Supported 00:38:41.905 00:38:41.905 Error Log 00:38:41.905 ========= 00:38:41.905 00:38:41.905 Arbitration 00:38:41.905 =========== 00:38:41.905 Arbitration Burst: 1 00:38:41.905 00:38:41.905 Power Management 00:38:41.905 ================ 00:38:41.905 Number of Power States: 1 00:38:41.905 Current Power State: Power State #0 00:38:41.905 Power State #0: 00:38:41.905 Max Power: 0.00 W 00:38:41.905 Non-Operational State: Operational 00:38:41.905 Entry Latency: Not Reported 00:38:41.905 Exit Latency: Not Reported 00:38:41.905 Relative Read Throughput: 0 00:38:41.905 Relative Read Latency: 0 00:38:41.905 Relative Write Throughput: 0 00:38:41.905 Relative Write Latency: 0 00:38:41.905 Idle Power: Not Reported 00:38:41.905 Active Power: Not Reported 00:38:41.905 Non-Operational Permissive Mode: Not Supported 00:38:41.905 00:38:41.905 Health Information 00:38:41.905 ================== 00:38:41.905 Critical Warnings: 00:38:41.905 Available Spare Space: OK 00:38:41.905 Temperature: OK 00:38:41.905 Device Reliability: OK 00:38:41.905 Read Only: No 00:38:41.905 Volatile Memory Backup: OK 00:38:41.905 Current Temperature: 0 Kelvin (-273 Celsius) 00:38:41.905 Temperature Threshold: [2024-06-10 11:47:10.804990] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.804996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fa1ec0) 00:38:41.905 [2024-06-10 11:47:10.805002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.905 [2024-06-10 11:47:10.805013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025790, cid 7, qid 0 00:38:41.905 [2024-06-10 11:47:10.805198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.905 [2024-06-10 11:47:10.805204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.905 [2024-06-10 11:47:10.805208] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.805211] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025790) on tqpair=0x1fa1ec0 00:38:41.905 [2024-06-10 11:47:10.805242] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:38:41.905 [2024-06-10 11:47:10.805253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.905 [2024-06-10 11:47:10.805259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.905 [2024-06-10 11:47:10.805265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.905 [2024-06-10 11:47:10.805271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.905 [2024-06-10 11:47:10.805278] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.805282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.805285] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1ec0) 00:38:41.905 [2024-06-10 11:47:10.805292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.905 [2024-06-10 11:47:10.805303] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025210, cid 3, qid 0 00:38:41.905 [2024-06-10 11:47:10.805472] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.905 [2024-06-10 11:47:10.805478] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.905 [2024-06-10 11:47:10.805482] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.805485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025210) on tqpair=0x1fa1ec0 00:38:41.905 [2024-06-10 11:47:10.805492] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.805496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.805499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1ec0) 00:38:41.905 [2024-06-10 11:47:10.805506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.905 [2024-06-10 11:47:10.805519] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025210, cid 3, qid 0 00:38:41.905 [2024-06-10 11:47:10.809676] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.905 [2024-06-10 11:47:10.809685] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.905 [2024-06-10 11:47:10.809688] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.809692] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025210) on tqpair=0x1fa1ec0 00:38:41.905 [2024-06-10 11:47:10.809700] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:38:41.905 [2024-06-10 11:47:10.809704] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:38:41.905 [2024-06-10 11:47:10.809714] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.809718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.809721] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fa1ec0) 00:38:41.905 [2024-06-10 11:47:10.809728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.905 [2024-06-10 11:47:10.809739] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2025210, cid 3, qid 0 00:38:41.905 [2024-06-10 11:47:10.809925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:38:41.905 [2024-06-10 11:47:10.809931] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:38:41.905 [2024-06-10 
11:47:10.809934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:38:41.905 [2024-06-10 11:47:10.809938] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2025210) on tqpair=0x1fa1ec0 00:38:41.905 [2024-06-10 11:47:10.809946] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:38:41.905 0 Kelvin (-273 Celsius) 00:38:41.905 Available Spare: 0% 00:38:41.905 Available Spare Threshold: 0% 00:38:41.905 Life Percentage Used: 0% 00:38:41.905 Data Units Read: 0 00:38:41.905 Data Units Written: 0 00:38:41.905 Host Read Commands: 0 00:38:41.905 Host Write Commands: 0 00:38:41.905 Controller Busy Time: 0 minutes 00:38:41.905 Power Cycles: 0 00:38:41.905 Power On Hours: 0 hours 00:38:41.905 Unsafe Shutdowns: 0 00:38:41.905 Unrecoverable Media Errors: 0 00:38:41.905 Lifetime Error Log Entries: 0 00:38:41.905 Warning Temperature Time: 0 minutes 00:38:41.905 Critical Temperature Time: 0 minutes 00:38:41.905 00:38:41.905 Number of Queues 00:38:41.905 ================ 00:38:41.905 Number of I/O Submission Queues: 127 00:38:41.905 Number of I/O Completion Queues: 127 00:38:41.905 00:38:41.905 Active Namespaces 00:38:41.905 ================= 00:38:41.905 Namespace ID:1 00:38:41.905 Error Recovery Timeout: Unlimited 00:38:41.905 Command Set Identifier: NVM (00h) 00:38:41.905 Deallocate: Supported 00:38:41.905 Deallocated/Unwritten Error: Not Supported 00:38:41.905 Deallocated Read Value: Unknown 00:38:41.905 Deallocate in Write Zeroes: Not Supported 00:38:41.905 Deallocated Guard Field: 0xFFFF 00:38:41.906 Flush: Supported 00:38:41.906 Reservation: Supported 00:38:41.906 Namespace Sharing Capabilities: Multiple Controllers 00:38:41.906 Size (in LBAs): 131072 (0GiB) 00:38:41.906 Capacity (in LBAs): 131072 (0GiB) 00:38:41.906 Utilization (in LBAs): 131072 (0GiB) 00:38:41.906 NGUID: ABCDEF0123456789ABCDEF0123456789 00:38:41.906 EUI64: ABCDEF0123456789 00:38:41.906 UUID: afe64faa-31b2-4012-8797-57cdc35b4a5b 00:38:41.906 Thin Provisioning: Not Supported 00:38:41.906 Per-NS Atomic Units: Yes 00:38:41.906 Atomic Boundary Size (Normal): 0 00:38:41.906 Atomic Boundary Size (PFail): 0 00:38:41.906 Atomic Boundary Offset: 0 00:38:41.906 Maximum Single Source Range Length: 65535 00:38:41.906 Maximum Copy Length: 65535 00:38:41.906 Maximum Source Range Count: 1 00:38:41.906 NGUID/EUI64 Never Reused: No 00:38:41.906 Namespace Write Protected: No 00:38:41.906 Number of LBA Formats: 1 00:38:41.906 Current LBA Format: LBA Format #00 00:38:41.906 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:41.906 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp 
']' 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:41.906 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:41.906 rmmod nvme_tcp 00:38:42.167 rmmod nvme_fabrics 00:38:42.167 rmmod nvme_keyring 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2427108 ']' 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2427108 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 2427108 ']' 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 2427108 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2427108 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2427108' 00:38:42.167 killing process with pid 2427108 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 2427108 00:38:42.167 11:47:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 2427108 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:42.167 11:47:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.715 11:47:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:44.715 00:38:44.715 real 0m11.304s 00:38:44.715 user 0m8.236s 00:38:44.715 sys 0m5.915s 00:38:44.715 11:47:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:44.715 11:47:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:38:44.715 ************************************ 00:38:44.715 END TEST nvmf_identify 00:38:44.715 ************************************ 00:38:44.715 11:47:13 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:38:44.715 11:47:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:44.715 11:47:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:44.715 
11:47:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:44.715 ************************************ 00:38:44.715 START TEST nvmf_perf 00:38:44.715 ************************************ 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:38:44.715 * Looking for test storage... 00:38:44.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:44.715 
11:47:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:38:44.715 11:47:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:52.896 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:52.896 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:52.896 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:52.896 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:52.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:52.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:38:52.896 00:38:52.896 --- 10.0.0.2 ping statistics --- 00:38:52.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.896 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:38:52.896 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:52.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:52.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:38:52.896 00:38:52.896 --- 10.0.0.1 ping statistics --- 00:38:52.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.897 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2431462 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2431462 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 2431462 ']' 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:52.897 [2024-06-10 11:47:20.752694] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:38:52.897 [2024-06-10 11:47:20.752742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.897 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.897 [2024-06-10 11:47:20.819979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:52.897 [2024-06-10 11:47:20.886156] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.897 [2024-06-10 11:47:20.886191] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:52.897 [2024-06-10 11:47:20.886199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.897 [2024-06-10 11:47:20.886206] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.897 [2024-06-10 11:47:20.886213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.897 [2024-06-10 11:47:20.886345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:52.897 [2024-06-10 11:47:20.886462] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:52.897 [2024-06-10 11:47:20.886621] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.897 [2024-06-10 11:47:20.886622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:52.897 11:47:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:38:52.897 11:47:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:52.897 11:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:52.897 11:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:38:52.897 11:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:38:52.897 11:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:38:52.897 11:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:38:52.897 11:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:53.158 11:47:21 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:38:53.158 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:38:53.158 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:38:53.158 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:38:53.158 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:38:53.417 [2024-06-10 11:47:22.188918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:53.417 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:53.676 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:38:53.676 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:53.937 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:38:53.937 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:53.937 11:47:22 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:54.197 [2024-06-10 11:47:23.064079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.197 11:47:23 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:54.457 11:47:23 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:38:54.457 11:47:23 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:38:54.457 11:47:23 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:38:54.457 11:47:23 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:38:55.843 Initializing NVMe Controllers 00:38:55.843 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:38:55.843 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:38:55.843 Initialization complete. Launching workers. 00:38:55.843 ======================================================== 00:38:55.843 Latency(us) 00:38:55.843 Device Information : IOPS MiB/s Average min max 00:38:55.843 PCIE (0000:65:00.0) NSID 1 from core 0: 79348.99 309.96 402.69 13.15 7193.12 00:38:55.843 ======================================================== 00:38:55.843 Total : 79348.99 309.96 402.69 13.15 7193.12 00:38:55.843 00:38:55.843 11:47:24 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:55.843 EAL: No free 2048 kB hugepages reported on node 1 00:38:57.226 Initializing NVMe Controllers 00:38:57.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:57.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:57.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:57.226 Initialization complete. Launching workers. 
00:38:57.226 ======================================================== 00:38:57.226 Latency(us) 00:38:57.226 Device Information : IOPS MiB/s Average min max 00:38:57.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.00 0.30 13331.73 206.71 45869.34 00:38:57.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.00 0.27 15022.97 7959.53 47888.79 00:38:57.226 ======================================================== 00:38:57.226 Total : 146.00 0.57 14119.43 206.71 47888.79 00:38:57.226 00:38:57.226 11:47:25 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:57.226 EAL: No free 2048 kB hugepages reported on node 1 00:38:58.169 Initializing NVMe Controllers 00:38:58.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:58.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:58.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:58.169 Initialization complete. Launching workers. 00:38:58.169 ======================================================== 00:38:58.170 Latency(us) 00:38:58.170 Device Information : IOPS MiB/s Average min max 00:38:58.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8753.99 34.20 3672.83 529.05 8268.80 00:38:58.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3818.00 14.91 8421.39 6215.71 16216.44 00:38:58.170 ======================================================== 00:38:58.170 Total : 12571.99 49.11 5114.92 529.05 16216.44 00:38:58.170 00:38:58.170 11:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:38:58.170 11:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:38:58.170 11:47:27 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:58.170 EAL: No free 2048 kB hugepages reported on node 1 00:39:00.716 Initializing NVMe Controllers 00:39:00.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:00.716 Controller IO queue size 128, less than required. 00:39:00.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:00.716 Controller IO queue size 128, less than required. 00:39:00.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:00.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:00.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:00.716 Initialization complete. Launching workers. 
00:39:00.716 ======================================================== 00:39:00.716 Latency(us) 00:39:00.716 Device Information : IOPS MiB/s Average min max 00:39:00.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1138.72 284.68 114860.15 67088.48 185005.51 00:39:00.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 594.85 148.71 222345.20 55084.87 332468.57 00:39:00.716 ======================================================== 00:39:00.716 Total : 1733.57 433.39 151742.27 55084.87 332468.57 00:39:00.716 00:39:00.716 11:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:39:00.716 EAL: No free 2048 kB hugepages reported on node 1 00:39:00.977 No valid NVMe controllers or AIO or URING devices found 00:39:00.977 Initializing NVMe Controllers 00:39:00.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:00.977 Controller IO queue size 128, less than required. 00:39:00.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:00.977 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:39:00.977 Controller IO queue size 128, less than required. 00:39:00.977 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:00.977 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:39:00.977 WARNING: Some requested NVMe devices were skipped 00:39:00.977 11:47:29 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:39:00.977 EAL: No free 2048 kB hugepages reported on node 1 00:39:03.522 Initializing NVMe Controllers 00:39:03.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:03.522 Controller IO queue size 128, less than required. 00:39:03.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:03.522 Controller IO queue size 128, less than required. 00:39:03.522 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:03.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:03.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:03.522 Initialization complete. Launching workers. 
00:39:03.522 00:39:03.522 ==================== 00:39:03.522 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:39:03.522 TCP transport: 00:39:03.522 polls: 33287 00:39:03.522 idle_polls: 14772 00:39:03.522 sock_completions: 18515 00:39:03.522 nvme_completions: 4599 00:39:03.522 submitted_requests: 6904 00:39:03.522 queued_requests: 1 00:39:03.522 00:39:03.522 ==================== 00:39:03.522 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:39:03.522 TCP transport: 00:39:03.522 polls: 33290 00:39:03.522 idle_polls: 13363 00:39:03.522 sock_completions: 19927 00:39:03.522 nvme_completions: 4599 00:39:03.522 submitted_requests: 6788 00:39:03.522 queued_requests: 1 00:39:03.522 ======================================================== 00:39:03.522 Latency(us) 00:39:03.522 Device Information : IOPS MiB/s Average min max 00:39:03.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1148.17 287.04 115195.44 50654.47 183049.04 00:39:03.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1148.17 287.04 113954.39 48262.54 160328.11 00:39:03.522 ======================================================== 00:39:03.522 Total : 2296.35 574.09 114574.91 48262.54 183049.04 00:39:03.522 00:39:03.522 11:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:39:03.522 11:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:03.782 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:03.782 rmmod nvme_tcp 00:39:03.782 rmmod nvme_fabrics 00:39:03.782 rmmod nvme_keyring 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2431462 ']' 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2431462 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 2431462 ']' 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 2431462 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2431462 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:03.783 11:47:32 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2431462' 00:39:03.783 killing process with pid 2431462 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 2431462 00:39:03.783 11:47:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 2431462 00:39:05.695 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:05.695 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:05.695 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:05.695 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:05.695 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:05.695 11:47:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:05.696 11:47:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:05.696 11:47:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.304 11:47:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:08.304 00:39:08.304 real 0m23.437s 00:39:08.304 user 0m56.465s 00:39:08.304 sys 0m7.977s 00:39:08.304 11:47:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:08.304 11:47:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:39:08.304 ************************************ 00:39:08.304 END TEST nvmf_perf 00:39:08.304 ************************************ 00:39:08.304 11:47:36 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:39:08.304 11:47:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:08.304 11:47:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:08.304 11:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:08.304 ************************************ 00:39:08.304 START TEST nvmf_fio_host 00:39:08.304 ************************************ 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:39:08.305 * Looking for test storage... 
00:39:08.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:39:08.305 11:47:36 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:14.900 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:14.900 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:39:14.900 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:14.900 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:14.900 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:14.900 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:14.901 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:14.901 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:14.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:14.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
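For reference, the block below is a minimal, hedged sketch of the setup the harness performs next, reconstructed only from the commands echoed in this log: one port of the dual-port ice NIC (cvl_0_0) is moved into a private network namespace and acts as the NVMe-oF target side, the other port (cvl_0_1) stays in the root namespace as the initiator side, and the target is then configured over rpc.py. Interface names, IP addresses, the namespace name and the workspace paths are simply the values this particular run used; the backgrounding of nvmf_tgt and the SPDK shell variable are shorthand added here, and the real nvmf/common.sh does considerably more (cleanup, RDMA branches, retries).

    # --- target/initiator split across a network namespace (as in nvmf_tcp_init) ---
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # namespace that owns the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # sanity checks, as logged above and below
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # --- target bring-up and subsystem configuration (as in host/perf.sh and host/fio.sh) ---
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk              # workspace path used by this run
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # (the harness backgrounds the target and waits for /var/tmp/spdk.sock before issuing RPCs)
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The workloads that follow then reach the target from the root namespace, spdk_nvme_perf via -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' and the fio SPDK plugin via --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1', exactly as echoed in the log entries below.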
00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:14.901 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:15.163 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:15.163 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:15.163 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:15.163 11:47:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:15.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:15.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:39:15.163 00:39:15.163 --- 10.0.0.2 ping statistics --- 00:39:15.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.163 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:15.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:15.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:39:15.163 00:39:15.163 --- 10.0.0.1 ping statistics --- 00:39:15.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:15.163 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2438486 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2438486 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 2438486 ']' 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:15.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:15.163 11:47:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.424 [2024-06-10 11:47:44.138715] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:39:15.424 [2024-06-10 11:47:44.138783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:15.424 EAL: No free 2048 kB hugepages reported on node 1 00:39:15.424 [2024-06-10 11:47:44.209454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:15.424 [2024-06-10 11:47:44.284184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:15.424 [2024-06-10 11:47:44.284221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:15.424 [2024-06-10 11:47:44.284229] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:15.424 [2024-06-10 11:47:44.284235] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:15.424 [2024-06-10 11:47:44.284241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:15.424 [2024-06-10 11:47:44.284393] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:15.424 [2024-06-10 11:47:44.284511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:39:15.424 [2024-06-10 11:47:44.284695] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:39:15.424 [2024-06-10 11:47:44.284697] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.366 11:47:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:16.366 11:47:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:39:16.366 11:47:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:16.366 [2024-06-10 11:47:45.196056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:16.366 11:47:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:39:16.366 11:47:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:16.366 11:47:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.366 11:47:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:39:16.627 Malloc1 00:39:16.627 11:47:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:16.886 11:47:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:17.147 11:47:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:17.147 [2024-06-10 11:47:46.106424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:39:17.408 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:39:17.690 11:47:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:39:17.955 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:39:17.955 fio-3.35 00:39:17.955 Starting 1 thread 00:39:17.955 EAL: No free 2048 kB hugepages reported on node 1 00:39:20.501 00:39:20.501 test: (groupid=0, jobs=1): err= 0: pid=2439066: Mon Jun 10 11:47:49 2024 00:39:20.501 read: IOPS=9774, BW=38.2MiB/s (40.0MB/s)(76.6MiB/2006msec) 00:39:20.501 slat (usec): min=2, max=210, avg= 2.17, stdev= 2.08 00:39:20.501 clat (usec): min=2840, max=12525, avg=7211.18, stdev=520.33 00:39:20.501 lat (usec): min=2875, max=12527, avg=7213.35, stdev=520.13 00:39:20.501 clat percentiles (usec): 00:39:20.501 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:39:20.501 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7308], 00:39:20.501 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7832], 95.00th=[ 7963], 00:39:20.501 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[10814], 99.95th=[11863], 00:39:20.501 | 99.99th=[12518] 00:39:20.501 bw ( KiB/s): min=37920, 
max=39760, per=99.95%, avg=39078.00, stdev=831.01, samples=4 00:39:20.501 iops : min= 9480, max= 9940, avg=9769.50, stdev=207.75, samples=4 00:39:20.501 write: IOPS=9784, BW=38.2MiB/s (40.1MB/s)(76.7MiB/2006msec); 0 zone resets 00:39:20.501 slat (usec): min=2, max=205, avg= 2.27, stdev= 1.62 00:39:20.501 clat (usec): min=2267, max=11824, avg=5783.01, stdev=441.57 00:39:20.501 lat (usec): min=2285, max=11826, avg=5785.27, stdev=441.42 00:39:20.501 clat percentiles (usec): 00:39:20.501 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:39:20.501 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:39:20.501 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6390], 00:39:20.501 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 9110], 99.95th=[10159], 00:39:20.501 | 99.99th=[11207] 00:39:20.501 bw ( KiB/s): min=38536, max=39680, per=100.00%, avg=39138.00, stdev=471.23, samples=4 00:39:20.501 iops : min= 9634, max= 9920, avg=9784.50, stdev=117.81, samples=4 00:39:20.501 lat (msec) : 4=0.13%, 10=99.78%, 20=0.09% 00:39:20.501 cpu : usr=70.52%, sys=26.48%, ctx=57, majf=0, minf=6 00:39:20.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:39:20.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:20.501 issued rwts: total=19607,19628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:20.501 00:39:20.501 Run status group 0 (all jobs): 00:39:20.501 READ: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.6MiB (80.3MB), run=2006-2006msec 00:39:20.501 WRITE: bw=38.2MiB/s (40.1MB/s), 38.2MiB/s-38.2MiB/s (40.1MB/s-40.1MB/s), io=76.7MiB (80.4MB), run=2006-2006msec 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # 
awk '{print $3}' 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:39:20.501 11:47:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:39:20.501 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:39:20.501 fio-3.35 00:39:20.501 Starting 1 thread 00:39:20.501 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.049 00:39:23.049 test: (groupid=0, jobs=1): err= 0: pid=2439880: Mon Jun 10 11:47:51 2024 00:39:23.049 read: IOPS=9031, BW=141MiB/s (148MB/s)(284MiB/2009msec) 00:39:23.049 slat (usec): min=3, max=108, avg= 3.67, stdev= 1.63 00:39:23.049 clat (usec): min=2754, max=16597, avg=8685.96, stdev=2224.17 00:39:23.049 lat (usec): min=2757, max=16601, avg=8689.63, stdev=2224.36 00:39:23.049 clat percentiles (usec): 00:39:23.049 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6652], 00:39:23.049 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:39:23.049 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[11731], 95.00th=[12256], 00:39:23.049 | 99.00th=[14091], 99.50th=[14877], 99.90th=[15795], 99.95th=[15795], 00:39:23.049 | 99.99th=[16581] 00:39:23.049 bw ( KiB/s): min=61248, max=88992, per=49.58%, avg=71640.00, stdev=13013.06, samples=4 00:39:23.049 iops : min= 3828, max= 5562, avg=4477.50, stdev=813.32, samples=4 00:39:23.049 write: IOPS=5445, BW=85.1MiB/s (89.2MB/s)(146MiB/1711msec); 0 zone resets 00:39:23.049 slat (usec): min=40, max=496, avg=41.28, stdev= 9.76 00:39:23.049 clat (usec): min=3464, max=15266, avg=9489.78, stdev=1528.11 00:39:23.049 lat (usec): min=3504, max=15405, avg=9531.07, stdev=1529.91 00:39:23.049 clat percentiles (usec): 00:39:23.049 | 1.00th=[ 6259], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8291], 00:39:23.049 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:39:23.049 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12256], 00:39:23.049 | 99.00th=[13566], 99.50th=[13960], 99.90th=[15008], 99.95th=[15139], 00:39:23.049 | 99.99th=[15270] 00:39:23.049 bw ( KiB/s): min=64224, max=92448, per=85.55%, avg=74536.00, stdev=13345.81, samples=4 00:39:23.049 iops : min= 4014, max= 5778, avg=4658.50, stdev=834.11, samples=4 00:39:23.049 lat (msec) : 4=0.36%, 10=69.58%, 20=30.06% 00:39:23.049 cpu : usr=83.67%, sys=13.70%, ctx=17, majf=0, minf=23 00:39:23.049 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:39:23.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:23.049 issued rwts: total=18144,9317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:23.049 00:39:23.049 Run status group 0 (all jobs): 00:39:23.049 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=284MiB (297MB), run=2009-2009msec 00:39:23.049 WRITE: bw=85.1MiB/s (89.2MB/s), 85.1MiB/s-85.1MiB/s (89.2MB/s-89.2MB/s), io=146MiB (153MB), run=1711-1711msec 00:39:23.049 11:47:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:23.049 11:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:39:23.049 11:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:23.049 11:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:39:23.049 11:47:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:23.310 rmmod nvme_tcp 00:39:23.310 rmmod nvme_fabrics 00:39:23.310 rmmod nvme_keyring 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2438486 ']' 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2438486 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 2438486 ']' 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 2438486 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2438486 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2438486' 00:39:23.310 killing process with pid 2438486 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 2438486 00:39:23.310 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 2438486 00:39:23.572 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:23.572 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:23.572 11:47:52 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:23.572 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:23.572 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:23.572 11:47:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:23.572 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:23.572 11:47:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.487 11:47:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:25.487 00:39:25.487 real 0m17.603s 00:39:25.487 user 1m11.082s 00:39:25.487 sys 0m7.196s 00:39:25.487 11:47:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:25.487 11:47:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.487 ************************************ 00:39:25.487 END TEST nvmf_fio_host 00:39:25.487 ************************************ 00:39:25.487 11:47:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:39:25.487 11:47:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:25.487 11:47:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:25.487 11:47:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:25.748 ************************************ 00:39:25.748 START TEST nvmf_failover 00:39:25.748 ************************************ 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:39:25.748 * Looking for test storage... 
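Before following the failover run that starts here, it may help to see the fio invocation behind the nvmf_fio_host numbers above without the xtrace noise. This is only a condensed restatement of commands already visible in the trace; the paths are specific to this Jenkins workspace.

# fio runs against the NVMe/TCP subsystem through the SPDK fio plugin; the plugin is
# injected with LD_PRELOAD, and the transport (type, address family, target address,
# service port, namespace) is encoded in fio's --filename string instead of a block
# device path.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'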
00:39:25.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.748 11:47:54 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:39:25.749 11:47:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:33.891 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:33.891 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:33.891 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:33.891 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:33.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:33.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:39:33.891 00:39:33.891 --- 10.0.0.2 ping statistics --- 00:39:33.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.891 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:33.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:33.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:39:33.891 00:39:33.891 --- 10.0.0.1 ping statistics --- 00:39:33.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:33.891 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:33.891 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2444511 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2444511 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2444511 ']' 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:33.892 11:48:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:33.892 [2024-06-10 11:48:01.767840] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
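The network plumbing the failover test depends on is scattered through the xtrace above; pulled together it is roughly the following. The interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are just what this rig's ice ports were renamed to and will differ elsewhere; every command below is taken from the trace, nothing is added.

# One port of the e810 NIC becomes the target side inside a network namespace,
# the other stays in the root namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port and confirm reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The kernel initiator needs the nvme-tcp transport module; the nvmf_tgt app whose
# startup banner appears just above is then launched inside the namespace.
modprobe nvme-tcp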
00:39:33.892 [2024-06-10 11:48:01.767903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:33.892 EAL: No free 2048 kB hugepages reported on node 1 00:39:33.892 [2024-06-10 11:48:01.841114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:33.892 [2024-06-10 11:48:01.916661] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:33.892 [2024-06-10 11:48:01.916706] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:33.892 [2024-06-10 11:48:01.916714] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:33.892 [2024-06-10 11:48:01.916721] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:33.892 [2024-06-10 11:48:01.916726] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:33.892 [2024-06-10 11:48:01.916887] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:39:33.892 [2024-06-10 11:48:01.917102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.892 [2024-06-10 11:48:01.917102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:39:33.892 11:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:33.892 11:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:39:33.892 11:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:33.892 11:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:33.892 11:48:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:33.892 11:48:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:33.892 11:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:34.152 [2024-06-10 11:48:02.878039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:34.152 11:48:02 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:39:34.413 Malloc0 00:39:34.413 11:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:34.413 11:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:34.673 11:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:34.933 [2024-06-10 11:48:03.781722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:34.933 11:48:03 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:35.194 [2024-06-10 11:48:03.998285] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:35.194 11:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:35.454 [2024-06-10 11:48:04.206984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:35.454 11:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2445010 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2445010 /var/tmp/bdevperf.sock 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2445010 ']' 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:35.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:35.455 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:35.716 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:35.716 11:48:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:39:35.716 11:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:35.976 NVMe0n1 00:39:35.976 11:48:04 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:36.237 00:39:36.498 11:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2445269 00:39:36.498 11:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:36.498 11:48:05 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:39:37.441 11:48:06 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:37.441 [2024-06-10 11:48:06.411518] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.441 [2024-06-10 11:48:06.411578] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the 
state(5) to be set 00:39:37.441 [the same 'recv state of tqpair=0x1e339e0 is same with the state(5) to be set' message repeats here once per recv-state transition, timestamps 11:48:06.411584 through 11:48:06.411783] 00:39:37.442 [2024-06-10
11:48:06.411787] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411792] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411800] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411805] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411809] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411813] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411818] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411826] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411835] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411839] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411843] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411848] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411852] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411856] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411861] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411866] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.442 [2024-06-10 11:48:06.411871] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e339e0 is same with the state(5) to be set 00:39:37.702 11:48:06 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:39:41.003 11:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:41.003 00:39:41.003 11:48:09 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:41.264 [2024-06-10 11:48:09.989372] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989418] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989426] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989433] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989440] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989446] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989453] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989459] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989466] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989472] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989479] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989485] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989491] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989498] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989504] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989517] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989524] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989530] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.264 [2024-06-10 11:48:09.989537] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the 
state(5) to be set 00:39:41.264 [the same 'recv state of tqpair=0x1e350e0 is same with the state(5) to be set' message repeats here once per recv-state transition, timestamps 11:48:09.989543 through 11:48:09.989830] 00:39:41.265 [2024-06-10
11:48:09.989837] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.265 [2024-06-10 11:48:09.989843] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.265 [2024-06-10 11:48:09.989850] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.265 [2024-06-10 11:48:09.989857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.265 [2024-06-10 11:48:09.989864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.265 [2024-06-10 11:48:09.989871] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e350e0 is same with the state(5) to be set 00:39:41.265 11:48:10 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:39:44.568 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:44.568 [2024-06-10 11:48:13.217212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:44.568 11:48:13 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:39:45.510 11:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:45.510 11:48:14 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2445269 00:39:52.174 0 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2445010 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2445010 ']' 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2445010 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2445010 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2445010' 00:39:52.174 killing process with pid 2445010 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2445010 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2445010 00:39:52.174 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:52.174 [2024-06-10 11:48:04.278748] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
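The bdevperf-side log that failover.sh replays from try.txt continues below. For orientation, the scenario that produced it boils down to the following rpc.py sequence, condensed from the calls already shown in the trace above; $RPC is just shorthand for the rpc.py path used throughout the log, and the loop is only a compact way of writing the three add_listener calls.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MB malloc bdev (512-byte blocks) exposed as a
# namespace, and listeners on three ports of the same subsystem.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# Host side: bdevperf attaches the same subsystem over two ports, then listeners are
# removed and re-added one at a time so I/O has to fail over to a surviving path.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bursts of recv-state messages collapsed above line up with these listener removals, which suggests they come from established connections being torn down during each failover step.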
00:39:52.174 [2024-06-10 11:48:04.278806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2445010 ] 00:39:52.174 EAL: No free 2048 kB hugepages reported on node 1 00:39:52.174 [2024-06-10 11:48:04.337715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.174 [2024-06-10 11:48:04.401561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.174 Running I/O for 15 seconds... 00:39:52.174 [2024-06-10 11:48:06.414394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97992 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 
[2024-06-10 11:48:06.414741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.174 [2024-06-10 11:48:06.414927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.174 [2024-06-10 11:48:06.414933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.414942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.175 [2024-06-10 11:48:06.414949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.414959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.175 [2024-06-10 11:48:06.414967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.414976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.175 [2024-06-10 11:48:06.414983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.414993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 
11:48:06.415392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.175 [2024-06-10 11:48:06.415557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.175 [2024-06-10 11:48:06.415564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98640 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.415979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.176 [2024-06-10 11:48:06.415987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98688 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98696 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98704 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98712 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98720 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98728 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98736 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98744 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.176 [2024-06-10 11:48:06.416230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.176 [2024-06-10 11:48:06.416236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.176 [2024-06-10 11:48:06.416241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98752 len:8 PRP1 0x0 PRP2 0x0 00:39:52.176 [2024-06-10 11:48:06.416248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 
11:48:06.416255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98760 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98768 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98776 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98784 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98792 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98800 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416410] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98808 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98816 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98824 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98832 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98840 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98848 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:39:52.177 [2024-06-10 11:48:06.416569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98856 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98872 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98880 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98888 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98896 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.416730] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.416735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98904 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.416742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.416749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.425705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.425733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98912 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.425744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.425755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.425761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.425767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98920 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.425774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.425781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.425787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.425793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98928 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.425800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.425807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.425812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.177 [2024-06-10 11:48:06.425818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98936 len:8 PRP1 0x0 PRP2 0x0 00:39:52.177 [2024-06-10 11:48:06.425825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.177 [2024-06-10 11:48:06.425832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.177 [2024-06-10 11:48:06.425838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.178 [2024-06-10 11:48:06.425843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98944 len:8 PRP1 0x0 PRP2 0x0 00:39:52.178 [2024-06-10 11:48:06.425851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:06.425889] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x245ae50 was disconnected and freed. reset controller. 
00:39:52.178 [2024-06-10 11:48:06.425898] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:39:52.178 [2024-06-10 11:48:06.425933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.178 [2024-06-10 11:48:06.425941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:06.425950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.178 [2024-06-10 11:48:06.425957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:06.425965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.178 [2024-06-10 11:48:06.425972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:06.425980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.178 [2024-06-10 11:48:06.425987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:06.425995] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:52.178 [2024-06-10 11:48:06.426026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c140 (9): Bad file descriptor 00:39:52.178 [2024-06-10 11:48:06.429588] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:52.178 [2024-06-10 11:48:06.508406] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:52.178 [2024-06-10 11:48:09.991478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.178 [2024-06-10 11:48:09.991514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.178 [2024-06-10 11:48:09.991910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.178 [2024-06-10 11:48:09.991918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.991927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.991934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.991943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.991950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.991959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.991967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.991976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.991983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.991992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.991999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:62 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111320 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.179 [2024-06-10 11:48:09.992501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 
11:48:09.992533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.179 [2024-06-10 11:48:09.992581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.179 [2024-06-10 11:48:09.992591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.180 [2024-06-10 11:48:09.992770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.180 [2024-06-10 11:48:09.992786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.180 [2024-06-10 11:48:09.992803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.180 [2024-06-10 11:48:09.992819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.180 [2024-06-10 11:48:09.992836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.180 [2024-06-10 11:48:09.992852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.180 [2024-06-10 11:48:09.992868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.992985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.992994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:52.180 [2024-06-10 11:48:09.993208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.180 [2024-06-10 11:48:09.993257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.180 [2024-06-10 11:48:09.993264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.181 [2024-06-10 11:48:09.993280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.181 [2024-06-10 11:48:09.993296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.181 [2024-06-10 11:48:09.993313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.181 [2024-06-10 11:48:09.993331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111744 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 
11:48:09.993393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111752 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111760 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111768 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111776 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111784 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111792 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111800 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111808 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111816 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111824 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111832 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111840 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:111848 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111856 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111864 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:09.993783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:09.993791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:09.993797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111872 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:09.993804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:10.003895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:10.003924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:10.003934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111880 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:10.003943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:10.003951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.181 [2024-06-10 11:48:10.003956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.181 [2024-06-10 11:48:10.003962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111888 len:8 PRP1 0x0 PRP2 0x0 00:39:52.181 [2024-06-10 11:48:10.003969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.181 [2024-06-10 11:48:10.004008] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2605630 was disconnected and freed. reset controller. 
00:39:52.181 [2024-06-10 11:48:10.004018] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:39:52.181 [2024-06-10 11:48:10.004045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:39:52.181 [2024-06-10 11:48:10.004054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:52.181 [2024-06-10 11:48:10.004064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:39:52.181 [2024-06-10 11:48:10.004071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:52.181 [2024-06-10 11:48:10.004078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:39:52.181 [2024-06-10 11:48:10.004085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:52.181 [2024-06-10 11:48:10.004094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:39:52.182 [2024-06-10 11:48:10.004101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:52.182 [2024-06-10 11:48:10.004108] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:39:52.182 [2024-06-10 11:48:10.004137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c140 (9): Bad file descriptor 
00:39:52.182 [2024-06-10 11:48:10.007709] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:39:52.182 [2024-06-10 11:48:10.084377] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:52.182 [2024-06-10 11:48:14.442928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.442971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.442989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.442997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.443014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.443031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.443048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.443064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.443080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.182 [2024-06-10 11:48:14.443097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443306] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56560 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.182 [2024-06-10 11:48:14.443554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.182 [2024-06-10 11:48:14.443561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 
[2024-06-10 11:48:14.443642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.443991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.443999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.183 [2024-06-10 11:48:14.444143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.183 [2024-06-10 11:48:14.444152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.184 [2024-06-10 11:48:14.444159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444296] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444456] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 
[2024-06-10 11:48:14.444785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.184 [2024-06-10 11:48:14.444801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.184 [2024-06-10 11:48:14.444810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:57224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.444991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.444999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.445006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.445016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:52.185 [2024-06-10 11:48:14.445023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.445043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:39:52.185 [2024-06-10 11:48:14.445049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:39:52.185 [2024-06-10 11:48:14.445056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57328 len:8 PRP1 0x0 PRP2 0x0 00:39:52.185 [2024-06-10 11:48:14.445063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.445100] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2605810 was disconnected and freed. reset controller. 
00:39:52.185 [2024-06-10 11:48:14.445109] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:39:52.185 [2024-06-10 11:48:14.445126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.185 [2024-06-10 11:48:14.445134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.445143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.185 [2024-06-10 11:48:14.445150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.445158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.185 [2024-06-10 11:48:14.445165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.445173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:52.185 [2024-06-10 11:48:14.445180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:52.185 [2024-06-10 11:48:14.445187] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:52.185 [2024-06-10 11:48:14.448718] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:52.185 [2024-06-10 11:48:14.448741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243c140 (9): Bad file descriptor 00:39:52.185 [2024-06-10 11:48:14.612381] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
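Note: the block above is the failover path in action: queued WRITEs are completed manually with ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme fails over from 10.0.0.2:4422 to 10.0.0.2:4420 before resetting the controller. That only works because the same bdev name was registered against every alternate trid up front. A minimal sketch of that registration, assuming a target already listening on 10.0.0.2 ports 4420-4422 for nqn.2016-06.io.spdk:cnode1 and the stock rpc.py helper used elsewhere in this run (paths are illustrative, not part of this log):

# Sketch only: register one bdev (NVMe0) against three TCP paths so
# bdev_nvme_failover_trid has an alternate path to move I/O to when one drops.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
for port in 4420 4421 4422; do
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done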
00:39:52.185 00:39:52.185 Latency(us) 00:39:52.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.185 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:52.185 Verification LBA range: start 0x0 length 0x4000 00:39:52.185 NVMe0n1 : 15.01 9072.81 35.44 779.35 0.00 12963.84 774.83 19223.89 00:39:52.185 =================================================================================================================== 00:39:52.185 Total : 9072.81 35.44 779.35 0.00 12963.84 774.83 19223.89 00:39:52.185 Received shutdown signal, test time was about 15.000000 seconds 00:39:52.185 00:39:52.185 Latency(us) 00:39:52.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.185 =================================================================================================================== 00:39:52.185 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2448647 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2448647 /var/tmp/bdevperf.sock 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2448647 ']' 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:52.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
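Note: the pass/fail criterion applied just above is a simple count of successful resets in the captured bdevperf output: three failover hops are expected, so exactly three 'Resetting controller successful' notices must appear. A hedged sketch of that check, assuming the bdevperf output was captured to test/nvmf/host/try.txt as in this run:

# Sketch of the verification step: fail unless exactly three controller
# resets completed during the timed failover run.
count=$(grep -c 'Resetting controller successful' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi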
00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:39:52.185 11:48:20 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:52.185 [2024-06-10 11:48:21.028815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:52.185 11:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:39:52.447 [2024-06-10 11:48:21.249386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:39:52.447 11:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:52.707 NVMe0n1 00:39:52.707 11:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:52.998 00:39:52.998 11:48:21 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:53.570 00:39:53.570 11:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:53.570 11:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:39:53.570 11:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:53.830 11:48:22 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:39:57.129 11:48:25 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:57.129 11:48:25 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:39:57.129 11:48:25 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2449679 00:39:57.129 11:48:25 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2449679 00:39:57.129 11:48:25 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:58.073 0 00:39:58.073 11:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:39:58.073 [2024-06-10 11:48:20.634528] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:39:58.073 [2024-06-10 11:48:20.634586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2448647 ] 00:39:58.073 EAL: No free 2048 kB hugepages reported on node 1 00:39:58.073 [2024-06-10 11:48:20.692748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.073 [2024-06-10 11:48:20.757436] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.073 [2024-06-10 11:48:22.588250] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:39:58.073 [2024-06-10 11:48:22.588292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.073 [2024-06-10 11:48:22.588303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.073 [2024-06-10 11:48:22.588312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.073 [2024-06-10 11:48:22.588319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.073 [2024-06-10 11:48:22.588327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.073 [2024-06-10 11:48:22.588334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.073 [2024-06-10 11:48:22.588342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:58.073 [2024-06-10 11:48:22.588349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:58.073 [2024-06-10 11:48:22.588356] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:39:58.073 [2024-06-10 11:48:22.588383] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:58.073 [2024-06-10 11:48:22.588397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1666140 (9): Bad file descriptor 00:39:58.073 [2024-06-10 11:48:22.600158] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:39:58.073 Running I/O for 1 seconds... 
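Note: the try.txt excerpt above is the second bdevperf instance doing the one-second verify pass whose results follow below. The driving sequence is: start bdevperf in RPC-wait mode (-z), attach the controller over the RPC socket, then trigger the workload with the bdevperf.py helper. A minimal sketch under those assumptions, reusing the flags and paths seen in this run (the sleep stands in for the harness's waitforlisten on the RPC socket):

# Sketch only: drive a short verify workload against an attached NVMe-oF bdev.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f \
    &> try.txt &
sleep 2   # crude stand-in for waitforlisten on $SOCK
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests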
00:39:58.073 00:39:58.073 Latency(us) 00:39:58.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.073 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:58.073 Verification LBA range: start 0x0 length 0x4000 00:39:58.073 NVMe0n1 : 1.01 9115.00 35.61 0.00 0.00 13979.66 2990.08 14090.24 00:39:58.073 =================================================================================================================== 00:39:58.073 Total : 9115.00 35.61 0.00 0.00 13979.66 2990.08 14090.24 00:39:58.073 11:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:58.073 11:48:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:39:58.334 11:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:58.595 11:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:39:58.595 11:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:39:58.856 11:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:59.118 11:48:27 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:40:02.420 11:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:40:02.420 11:48:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:40:02.420 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2448647 00:40:02.420 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2448647 ']' 00:40:02.420 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2448647 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2448647 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2448647' 00:40:02.421 killing process with pid 2448647 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2448647 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2448647 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:40:02.421 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:40:02.681 
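Note: path removal above is done one trid at a time: bdev_nvme_get_controllers confirms NVMe0 is still attached, then bdev_nvme_detach_controller drops a single listener address while the remaining paths keep serving I/O. A sketch of one such step under the same assumptions:

# Sketch only: confirm NVMe0 is still present, then drop its 10.0.0.2:4422
# path, leaving the other attached paths in place.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
"$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0 || exit 1
"$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp \
    -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1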
11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:02.681 rmmod nvme_tcp 00:40:02.681 rmmod nvme_fabrics 00:40:02.681 rmmod nvme_keyring 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:02.681 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2444511 ']' 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2444511 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2444511 ']' 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2444511 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2444511 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2444511' 00:40:02.682 killing process with pid 2444511 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2444511 00:40:02.682 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2444511 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:02.942 11:48:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.851 11:48:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:04.851 00:40:04.851 real 0m39.350s 00:40:04.851 user 2m2.377s 00:40:04.851 sys 0m8.172s 00:40:04.851 11:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:04.851 11:48:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
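Note: the cleanup above follows the usual nvmftestfini order: stop the target process, unload the kernel NVMe/TCP fabrics modules, and flush the test interface address. A rough sketch of the same sequence; the pid and interface name are the ones from this run and will differ elsewhere:

# Sketch only: tear down the soft target and kernel modules after the test.
kill 2444511 2>/dev/null || true      # nvmf target pid from this run
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1              # test NIC interface used in this run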
00:40:04.851 ************************************ 00:40:04.851 END TEST nvmf_failover 00:40:04.851 ************************************ 00:40:05.113 11:48:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:40:05.113 11:48:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:05.113 11:48:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:05.113 11:48:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:05.113 ************************************ 00:40:05.113 START TEST nvmf_host_discovery 00:40:05.113 ************************************ 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:40:05.113 * Looking for test storage... 00:40:05.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:05.113 11:48:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:05.113 11:48:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:05.113 11:48:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:05.114 11:48:34 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:40:05.114 11:48:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:13.258 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:13.258 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:13.259 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:13.259 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:13.259 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.259 11:48:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:13.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:13.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:40:13.259 00:40:13.259 --- 10.0.0.2 ping statistics --- 00:40:13.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.259 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:13.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:13.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:40:13.259 00:40:13.259 --- 10.0.0.1 ping statistics --- 00:40:13.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.259 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2454858 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2454858 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@830 -- # '[' -z 2454858 ']' 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:13.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:13.259 11:48:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.259 [2024-06-10 11:48:41.394600] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:40:13.259 [2024-06-10 11:48:41.394677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.259 EAL: No free 2048 kB hugepages reported on node 1 00:40:13.259 [2024-06-10 11:48:41.465178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.259 [2024-06-10 11:48:41.529781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:13.259 [2024-06-10 11:48:41.529818] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:13.259 [2024-06-10 11:48:41.529825] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:13.259 [2024-06-10 11:48:41.529832] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:13.259 [2024-06-10 11:48:41.529837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
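The block above is nvmf_tcp_init building a back-to-back NVMe/TCP topology out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of the same setup, assuming the interface names and binary path this particular job happens to use:

  #!/usr/bin/env bash
  set -euxo pipefail
  TGT_IF=cvl_0_0        # port handed to the target (moved into the namespace)
  INI_IF=cvl_0_1        # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk    # target network namespace

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP I/O port
  ping -c 1 10.0.0.2                                           # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                       # target namespace -> root namespace
  # the target application runs inside the namespace, as in the trace above
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &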
00:40:13.259 [2024-06-10 11:48:41.529862] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.259 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:13.259 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:40:13.259 11:48:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:13.259 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:13.259 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.521 [2024-06-10 11:48:42.236568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.521 [2024-06-10 11:48:42.248708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.521 null0 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.521 null1 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2455187 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2455187 /tmp/host.sock 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 2455187 ']' 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:40:13.521 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:13.521 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.521 [2024-06-10 11:48:42.333896] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:40:13.521 [2024-06-10 11:48:42.333943] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455187 ] 00:40:13.521 EAL: No free 2048 kB hugepages reported on node 1 00:40:13.521 [2024-06-10 11:48:42.391716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.521 [2024-06-10 11:48:42.455908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:13.782 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:13.783 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:14.043 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.044 [2024-06-10 11:48:42.894386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.044 11:48:42 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:14.044 11:48:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.044 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:14.304 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:40:14.305 11:48:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:40:14.876 [2024-06-10 11:48:43.615891] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:40:14.876 [2024-06-10 11:48:43.615913] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:40:14.876 [2024-06-10 11:48:43.615929] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:14.876 [2024-06-10 11:48:43.705186] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:40:15.135 [2024-06-10 11:48:43.929080] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:40:15.135 [2024-06-10 11:48:43.929103] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.395 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:15.396 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:15.657 
11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.657 [2024-06-10 11:48:44.442625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:15.657 [2024-06-10 11:48:44.443508] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:40:15.657 [2024-06-10 11:48:44.443532] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
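At this point the discovery test proper is wired up: the target publishes the well-known discovery subsystem on 10.0.0.2:8009 plus a data subsystem nqn.2016-06.io.spdk:cnode0 backed by null bdevs, and a second SPDK app (RPC socket /tmp/host.sock) had already issued bdev_nvme_start_discovery against that discovery service, so controller nvme0 and bdevs nvme0n1/nvme0n2 materialize on the host side as namespaces and listeners are added. Because the discovery service re-reads the log page on every AER, the ordering of the target-side calls is flexible. Collapsed into direct RPC calls (assuming rpc_cmd in the trace is the usual wrapper around scripts/rpc.py), the flow looks roughly like:

  # target side (default RPC socket)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # host side: the second app on /tmp/host.sock follows the discovery service
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # -> nvme0n1 (nvme0n2 once null1 is added)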
00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.657 [2024-06-10 11:48:44.532795] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:15.657 11:48:44 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:15.657 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:15.658 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:40:15.658 11:48:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:40:15.918 [2024-06-10 11:48:44.844287] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:40:15.918 [2024-06-10 11:48:44.844306] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:40:15.918 [2024-06-10 11:48:44.844311] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:16.861 11:48:45 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:16.861 [2024-06-10 11:48:45.722682] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:40:16.861 [2024-06-10 11:48:45.722703] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:40:16.861 [2024-06-10 11:48:45.730735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.861 [2024-06-10 11:48:45.730753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.861 [2024-06-10 11:48:45.730761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.861 [2024-06-10 11:48:45.730774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.861 [2024-06-10 11:48:45.730782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.861 [2024-06-10 11:48:45.730788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.861 [2024-06-10 11:48:45.730796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.861 [2024-06-10 11:48:45.730803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:40:16.861 [2024-06-10 11:48:45.730810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:16.861 [2024-06-10 11:48:45.740750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.861 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.861 [2024-06-10 11:48:45.750788] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:16.861 [2024-06-10 11:48:45.751204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:16.861 [2024-06-10 11:48:45.751219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a8740 with addr=10.0.0.2, port=4420 00:40:16.861 [2024-06-10 11:48:45.751227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.861 [2024-06-10 11:48:45.751239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.861 [2024-06-10 11:48:45.751249] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:16.861 [2024-06-10 11:48:45.751256] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:16.861 [2024-06-10 11:48:45.751263] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:16.861 [2024-06-10 11:48:45.751274] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
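The xtrace entries above (autotest_common.sh@913-917) show the suite's waitforcondition helper re-evaluating a shell condition in a bounded retry loop while the background reconnect attempts to 10.0.0.2:4420 keep failing. The helper's full body is not reproduced in this log; the sketch below is an approximation of the visible pattern, with the retry delay (one second) assumed rather than taken from the log.

# Sketch of the polling pattern seen in the xtrace; the real helper lives in
# test/common/autotest_common.sh and may differ in details such as the sleep
# interval and its behaviour on timeout.
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        # eval so command substitutions inside the condition, e.g.
        # "$(get_subsystem_names)", are re-run on every attempt
        if eval "$cond"; then
            return 0
        fi
        sleep 1   # assumed retry delay; not visible in this log
    done
    return 1
}

# Call pattern matching host/discovery.sh@129 above:
# waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'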
00:40:16.861 [2024-06-10 11:48:45.760842] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:16.861 [2024-06-10 11:48:45.761044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:16.861 [2024-06-10 11:48:45.761060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a8740 with addr=10.0.0.2, port=4420 00:40:16.862 [2024-06-10 11:48:45.761067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.862 [2024-06-10 11:48:45.761079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.862 [2024-06-10 11:48:45.761090] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:16.862 [2024-06-10 11:48:45.761097] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:16.862 [2024-06-10 11:48:45.761108] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:16.862 [2024-06-10 11:48:45.761119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:16.862 [2024-06-10 11:48:45.770894] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:16.862 [2024-06-10 11:48:45.771255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:16.862 [2024-06-10 11:48:45.771268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a8740 with addr=10.0.0.2, port=4420 00:40:16.862 [2024-06-10 11:48:45.771276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.862 [2024-06-10 11:48:45.771287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.862 [2024-06-10 11:48:45.771297] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:16.862 [2024-06-10 11:48:45.771303] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:16.862 [2024-06-10 11:48:45.771310] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:16.862 [2024-06-10 11:48:45.771321] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:16.862 [2024-06-10 11:48:45.780946] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:16.862 [2024-06-10 11:48:45.781314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:16.862 [2024-06-10 11:48:45.781326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a8740 with addr=10.0.0.2, port=4420 00:40:16.862 [2024-06-10 11:48:45.781333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.862 [2024-06-10 11:48:45.781344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.862 [2024-06-10 11:48:45.781354] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:16.862 [2024-06-10 11:48:45.781361] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:16.862 [2024-06-10 11:48:45.781368] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:16.862 [2024-06-10 11:48:45.781379] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:16.862 [2024-06-10 11:48:45.790998] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:16.862 [2024-06-10 11:48:45.791353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:16.862 [2024-06-10 11:48:45.791365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a8740 with addr=10.0.0.2, port=4420 00:40:16.862 [2024-06-10 11:48:45.791372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.862 [2024-06-10 11:48:45.791383] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.862 [2024-06-10 11:48:45.791393] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:16.862 [2024-06-10 11:48:45.791399] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:16.862 [2024-06-10 11:48:45.791406] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:16.862 [2024-06-10 11:48:45.791416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:16.862 [2024-06-10 11:48:45.801052] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:16.862 [2024-06-10 11:48:45.801279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:16.862 [2024-06-10 11:48:45.801291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a8740 with addr=10.0.0.2, port=4420 00:40:16.862 [2024-06-10 11:48:45.801298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.862 [2024-06-10 11:48:45.801309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.862 [2024-06-10 11:48:45.801320] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:16.862 [2024-06-10 11:48:45.801326] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:16.862 [2024-06-10 11:48:45.801333] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:16.862 [2024-06-10 11:48:45.801343] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:16.862 [2024-06-10 11:48:45.811104] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:16.862 [2024-06-10 11:48:45.811449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:16.862 [2024-06-10 11:48:45.811461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a8740 with addr=10.0.0.2, port=4420 00:40:16.862 [2024-06-10 11:48:45.811468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a8740 is same with the state(5) to be set 00:40:16.862 [2024-06-10 11:48:45.811479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a8740 (9): Bad file descriptor 00:40:16.862 [2024-06-10 11:48:45.811489] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:16.862 [2024-06-10 11:48:45.811495] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:40:16.862 [2024-06-10 11:48:45.811502] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:16.862 [2024-06-10 11:48:45.811512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
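The conditions being polled are built from small helpers that query the host-side SPDK application over its RPC socket (/tmp/host.sock) and normalize the JSON output with jq, sort and xargs, as the xtrace at host/discovery.sh@55 and @59 shows. A minimal stand-alone sketch of those helpers, assuming rpc.py is invoked directly instead of through the suite's rpc_cmd wrapper:

# Path taken from the multipath_status setup later in this log; the discovery
# test reaches the same client through its rpc_cmd wrapper.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_subsystem_names() {
    # NVMe controller names attached on the host app, e.g. "nvme0"
    "$rpc_py" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Bdev names created by discovery, e.g. "nvme0n1 nvme0n2"
    "$rpc_py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Usage matching the checks in the log:
# [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]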
00:40:16.862 [2024-06-10 11:48:45.811998] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:40:16.862 [2024-06-10 11:48:45.812016] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:40:16.862 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:40:17.124 11:48:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:40:17.124 
11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:17.124 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.385 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:40:17.385 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:40:17.385 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:40:17.385 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:40:17.385 11:48:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:17.385 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.385 11:48:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.327 [2024-06-10 11:48:47.174879] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:40:18.327 [2024-06-10 11:48:47.174897] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:40:18.327 [2024-06-10 11:48:47.174909] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:40:18.327 [2024-06-10 11:48:47.264196] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:40:18.588 [2024-06-10 11:48:47.533709] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:40:18.588 [2024-06-10 11:48:47.533740] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:40:18.588 request: 00:40:18.588 { 00:40:18.588 "name": "nvme", 00:40:18.588 "trtype": "tcp", 00:40:18.588 "traddr": "10.0.0.2", 00:40:18.588 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:18.588 "adrfam": "ipv4", 00:40:18.588 "trsvcid": "8009", 00:40:18.588 "wait_for_attach": true, 00:40:18.588 "method": "bdev_nvme_start_discovery", 00:40:18.588 "req_id": 1 00:40:18.588 } 00:40:18.588 Got JSON-RPC error response 00:40:18.588 response: 00:40:18.588 { 00:40:18.588 "code": -17, 00:40:18.588 "message": "File exists" 00:40:18.588 } 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:18.588 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- 
# local arg=rpc_cmd 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.850 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.850 request: 00:40:18.850 { 00:40:18.850 "name": "nvme_second", 00:40:18.850 "trtype": "tcp", 00:40:18.850 "traddr": "10.0.0.2", 00:40:18.850 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:18.850 "adrfam": "ipv4", 00:40:18.850 "trsvcid": "8009", 00:40:18.850 "wait_for_attach": true, 00:40:18.850 "method": "bdev_nvme_start_discovery", 00:40:18.850 "req_id": 1 00:40:18.850 } 00:40:18.850 Got JSON-RPC error response 00:40:18.850 response: 00:40:18.851 { 00:40:18.851 "code": -17, 00:40:18.851 "message": "File exists" 00:40:18.851 } 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:18.851 11:48:47 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:18.851 11:48:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:20.238 [2024-06-10 11:48:48.786414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:20.238 [2024-06-10 11:48:48.786442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a47b0 with addr=10.0.0.2, port=8010 00:40:20.238 [2024-06-10 11:48:48.786456] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:20.238 [2024-06-10 11:48:48.786463] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:20.238 [2024-06-10 11:48:48.786470] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:40:21.177 [2024-06-10 11:48:49.788674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:21.177 [2024-06-10 11:48:49.788696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a47b0 with addr=10.0.0.2, port=8010 00:40:21.177 [2024-06-10 11:48:49.788706] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:21.177 [2024-06-10 11:48:49.788713] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:21.177 [2024-06-10 11:48:49.788719] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:40:22.120 [2024-06-10 11:48:50.790743] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:40:22.120 request: 00:40:22.120 { 00:40:22.120 "name": "nvme_second", 00:40:22.120 "trtype": "tcp", 00:40:22.120 "traddr": "10.0.0.2", 00:40:22.120 "hostnqn": "nqn.2021-12.io.spdk:test", 00:40:22.120 "adrfam": "ipv4", 00:40:22.120 "trsvcid": "8010", 00:40:22.120 "attach_timeout_ms": 3000, 00:40:22.120 "method": "bdev_nvme_start_discovery", 00:40:22.120 "req_id": 1 00:40:22.120 } 00:40:22.120 Got JSON-RPC error response 00:40:22.120 response: 00:40:22.120 { 00:40:22.120 "code": -110, 00:40:22.120 "message": "Connection timed out" 
00:40:22.120 } 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2455187 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:22.120 rmmod nvme_tcp 00:40:22.120 rmmod nvme_fabrics 00:40:22.120 rmmod nvme_keyring 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2454858 ']' 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2454858 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 2454858 ']' 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 2454858 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2454858 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2454858' 00:40:22.120 killing process with pid 2454858 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 2454858 00:40:22.120 11:48:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 2454858 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:22.382 11:48:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:24.326 11:48:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:24.326 00:40:24.326 real 0m19.286s 00:40:24.326 user 0m22.031s 00:40:24.326 sys 0m6.817s 00:40:24.326 11:48:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:24.326 11:48:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:24.326 ************************************ 00:40:24.326 END TEST nvmf_host_discovery 00:40:24.326 ************************************ 00:40:24.326 11:48:53 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:40:24.326 11:48:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:24.326 11:48:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:24.326 11:48:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:24.326 ************************************ 00:40:24.326 START TEST nvmf_host_multipath_status 00:40:24.326 ************************************ 00:40:24.326 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:40:24.588 * Looking for test storage... 
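The teardown above runs the suite's killprocess helper against the nvmf target PID (2454858): it verifies the PID is still alive with kill -0, inspects the process name with ps, then kills and reaps it. A condensed sketch of that sequence, reconstructed from the xtrace at autotest_common.sh@949-973; the real helper has additional branches (for instance for sudo-owned processes) that are only partially visible here.

killprocess() {
    local pid=$1
    # Nothing to do if the process is already gone
    kill -0 "$pid" 2>/dev/null || return 0
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # The log shows an extra comparison against "sudo" here; that branch
        # is omitted in this sketch.
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    # Reap the process so the test run does not leave it behind
    wait "$pid" 2>/dev/null || true
}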
00:40:24.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:24.588 11:48:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:40:24.588 11:48:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:31.176 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:31.176 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:31.176 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:31.176 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:31.176 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:31.177 11:48:59 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:31.177 11:48:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:31.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:31.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:40:31.177 00:40:31.177 --- 10.0.0.2 ping statistics --- 00:40:31.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.177 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:31.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:31.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:40:31.177 00:40:31.177 --- 10.0.0.1 ping statistics --- 00:40:31.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.177 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:31.177 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2461031 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2461031 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2461031 ']' 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:31.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:31.438 11:49:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:40:31.438 [2024-06-10 11:49:00.237459] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:40:31.438 [2024-06-10 11:49:00.237529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:31.438 EAL: No free 2048 kB hugepages reported on node 1 00:40:31.438 [2024-06-10 11:49:00.308140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:31.438 [2024-06-10 11:49:00.381603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:31.438 [2024-06-10 11:49:00.381642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:31.438 [2024-06-10 11:49:00.381650] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:31.438 [2024-06-10 11:49:00.381656] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:31.438 [2024-06-10 11:49:00.381662] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:31.438 [2024-06-10 11:49:00.381789] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:31.438 [2024-06-10 11:49:00.381947] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2461031 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:32.381 [2024-06-10 11:49:01.313724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:32.381 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:32.641 Malloc0 00:40:32.641 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:40:32.902 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:33.164 11:49:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:33.424 [2024-06-10 11:49:02.137825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:33.424 [2024-06-10 11:49:02.338367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2461403 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2461403 /var/tmp/bdevperf.sock 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2461403 ']' 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:33.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:33.424 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:40:33.685 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:33.685 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:40:33.685 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:40:33.949 11:49:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:40:34.219 Nvme0n1 00:40:34.219 11:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:40:34.795 Nvme0n1 00:40:34.795 11:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:40:34.795 11:49:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:40:36.707 11:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:40:36.707 11:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:36.967 11:49:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:37.228 11:49:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:40:38.170 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:40:38.170 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:38.170 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.170 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:38.431 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:38.431 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:38.431 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.431 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:38.691 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:38.691 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:38.692 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.692 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:38.953 11:49:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:40:39.214 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:39.214 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:39.214 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:39.214 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:39.475 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:39.475 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:40:39.475 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:39.735 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:39.995 11:49:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:40:40.935 11:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:40:40.935 11:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:40.935 11:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:40.935 11:49:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:41.197 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:41.197 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:41.197 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.197 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.457 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:41.719 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.719 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:41.719 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:41.719 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.979 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:41.979 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:41.979 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:41.979 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:42.240 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:42.240 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:40:42.240 11:49:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:42.240 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:42.501 11:49:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:40:43.443 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:40:43.443 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:43.704 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.704 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:43.704 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:43.704 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:40:43.704 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.704 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:43.964 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:43.964 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:43.964 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:43.964 11:49:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:44.225 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.225 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:44.225 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.225 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:44.485 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.485 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:44.485 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.485 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:44.746 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:44.746 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:44.746 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:44.746 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:45.006 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:45.006 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:40:45.006 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:45.006 11:49:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:45.266 11:49:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:40:46.207 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:40:46.207 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:46.207 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.207 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:46.467 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:46.467 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:46.467 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.467 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:46.727 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:46.727 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:46.727 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.727 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:46.988 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:46.988 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:46.988 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:46.988 11:49:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:47.249 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:47.249 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:47.249 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:47.249 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:47.510 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
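Every check_status round in this trace repeats the same RPC-plus-jq pattern; condensed into a helper it looks roughly like the following sketch, with the long workspace path shortened to rpc.py and the port/field/expected arguments used exactly as above:
    # Sketch of the port_status check: query bdevperf's I/O paths over its RPC
    # socket and compare one field of the path on the given TCP service port.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }
    # Example: assert that port 4420 is the current path and 4421 is still reachable
    port_status 4420 current true && port_status 4421 accessible true && echo OK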
00:40:47.510 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:47.510 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:47.510 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:47.510 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:47.510 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:40:47.510 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:47.771 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:40:48.032 11:49:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:40:48.975 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:40:48.975 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:48.976 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:48.976 11:49:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:49.237 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:49.237 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:40:49.237 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.237 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:49.534 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:49.534 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:49.534 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.534 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
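The set_ANA_state steps driving these checks are just two listener RPCs, one per port; a shortened sketch of that helper (rpc.py standing in for the full scripts/rpc.py path shown in the trace):
    # Sketch: set the ANA state of each TCP listener on cnode1
    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    # Example: make 4420 unreachable so multipath I/O should continue on 4421
    set_ANA_state inaccessible optimized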
00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:49.818 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:50.080 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:50.080 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:40:50.080 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:50.080 11:49:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:50.340 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:50.340 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:40:50.340 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:50.600 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:50.861 11:49:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:40:51.801 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:40:51.801 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:51.801 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:51.801 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:52.061 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:52.061 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:52.061 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.061 11:49:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.322 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:52.582 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:52.582 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:40:52.582 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.582 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:52.843 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:52.843 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:52.843 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:52.843 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:53.103 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:53.103 11:49:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:40:53.368 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:40:53.368 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:40:53.628 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:53.628 11:49:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.013 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:55.274 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.274 11:49:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:55.274 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.274 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:55.274 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.274 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:55.274 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:55.274 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.535 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.535 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:55.535 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.535 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:55.795 11:49:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:55.795 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:55.795 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:55.795 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:56.056 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:56.056 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:40:56.056 11:49:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:56.056 11:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:40:56.316 11:49:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:40:57.257 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:40:57.257 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:40:57.257 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:57.257 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:40:57.518 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:40:57.518 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:40:57.518 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:57.518 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:40:57.778 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:57.778 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:40:57.778 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:57.778 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:40:58.039 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.039 11:49:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:40:58.039 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.039 11:49:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:40:58.299 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.299 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:40:58.299 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.299 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:40:58.560 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.560 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:40:58.560 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:40:58.560 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:40:58.560 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:40:58.560 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:40:58.561 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:58.821 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:40:59.082 11:49:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:41:00.024 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:41:00.024 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:41:00.024 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.024 11:49:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:41:00.284 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.284 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:41:00.284 11:49:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.284 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:41:00.545 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.545 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:41:00.545 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.545 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:00.807 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:41:01.068 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:01.068 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:41:01.068 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:01.068 11:49:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:41:01.329 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:01.329 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:41:01.329 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:41:01.590 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:41:01.850 11:49:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:41:02.792 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:41:02.792 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:41:02.792 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:02.792 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:41:03.054 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.054 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:41:03.054 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.054 11:49:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:41:03.324 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:03.324 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:41:03.324 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.324 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.592 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:41:03.853 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:41:03.853 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:41:03.853 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:41:03.853 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:41:04.113 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:41:04.113 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2461403 00:41:04.114 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2461403 ']' 00:41:04.114 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2461403 00:41:04.114 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:41:04.114 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:04.114 11:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2461403 00:41:04.114 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:41:04.114 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:41:04.114 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2461403' 00:41:04.114 killing process with pid 2461403 00:41:04.114 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2461403 00:41:04.114 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2461403 00:41:04.379 Connection closed with partial response: 00:41:04.379 00:41:04.379 00:41:04.379 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2461403 00:41:04.379 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:41:04.379 [2024-06-10 11:49:02.382505] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:41:04.379 [2024-06-10 11:49:02.382561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2461403 ] 00:41:04.379 EAL: No free 2048 kB hugepages reported on node 1 00:41:04.379 [2024-06-10 11:49:02.432749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.379 [2024-06-10 11:49:02.485101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:41:04.379 Running I/O for 90 seconds... 
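[editor's note] The multipath_status.sh trace above repeatedly calls a port_status helper: it asks the bdevperf instance for its I/O paths over the RPC socket and filters the JSON with jq, after flipping ANA states on the target side. The following is a minimal illustrative sketch of that flow, not the verbatim test script; the socket path, NQN, addresses and ports are copied from this run, while the helper body here is a reconstruction from the trace.

    # Sketch of the port_status check seen in the trace (assumes an SPDK bdevperf
    # instance on /var/tmp/bdevperf.sock and listeners on ports 4420/4421).
    rpc=./scripts/rpc.py

    port_status() {
        local port=$1 field=$2 expected=$3
        local val
        val=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$val" == "$expected" ]]
    }

    # Flip ANA state on the target, then give the initiator a moment to react,
    # mirroring set_ANA_state + sleep 1 in multipath_status.sh.
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1

    # Expected result, matching check_status above: 4420 is the current path and
    # stays accessible, 4421 remains connected but becomes inaccessible (ANA).
    port_status 4420 current true
    port_status 4421 current false
    port_status 4420 accessible true
    port_status 4421 accessible false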
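[editor's note] The "killing process with pid 2461403" sequence above comes from the killprocess helper in autotest_common.sh: it verifies the PID is still alive, checks the process name so it never kills a sudo wrapper, then kills and waits. A rough sketch of that logic, reconstructed from the trace rather than copied from the SPDK source:

    killprocess() {
        local pid=$1
        # Bail out if no PID was recorded or the process is already gone.
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1
        if [[ "$(uname)" == "Linux" ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # bdevperf runs on a reactor_N thread; refuse to kill a sudo wrapper.
            [[ "$process_name" != "sudo" ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }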
00:41:04.379 [2024-06-10 11:49:16.661603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:04.379 [2024-06-10 11:49:16.661882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.379 [2024-06-10 11:49:16.661887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.661897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.661902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.661912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.661917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.661928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.661933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.661943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.661948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.661959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.661965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.661975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.661980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.661990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.661995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.380 [2024-06-10 11:49:16.662265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.380 [2024-06-10 11:49:16.662282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:86 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662592] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:04.380 [2024-06-10 11:49:16.662731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.380 [2024-06-10 11:49:16.662736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.662750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.662755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.662768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.662773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.662787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.662792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.662806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.662810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 
m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.663301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.381 [2024-06-10 11:49:16.663320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.381 [2024-06-10 11:49:16.663339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.381 [2024-06-10 11:49:16.663359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.381 [2024-06-10 11:49:16.663378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.381 [2024-06-10 11:49:16.663397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.381 [2024-06-10 11:49:16.663420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.663434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.381 [2024-06-10 11:49:16.663439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664877] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.664978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.664995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103488 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:04.381 [2024-06-10 11:49:16.665190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.381 [2024-06-10 11:49:16.665196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 
11:49:16.665531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:16.665602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:16.665941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:16.665946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:30.623917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.382 [2024-06-10 11:49:30.623957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:30.623988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:30.623995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:30.624006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:30.624011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:04.382 [2024-06-10 11:49:30.624022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.382 [2024-06-10 11:49:30.624027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.624983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.624993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.624998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.625013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.625028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.625045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.625060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.625075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.625091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.625601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.625617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:04.383 [2024-06-10 11:49:30.625633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.383 [2024-06-10 11:49:30.625649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:41:04.383 [2024-06-10 11:49:30.625659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:04.384 [2024-06-10 11:49:30.625664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:41:04.384 Received shutdown signal, test time was about 29.291525 seconds 00:41:04.384 00:41:04.384 Latency(us) 00:41:04.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:04.384 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:41:04.384 Verification LBA range: start 0x0 length 
0x4000 00:41:04.384 Nvme0n1 : 29.29 9736.16 38.03 0.00 0.00 13127.06 576.85 3019898.88 00:41:04.384 =================================================================================================================== 00:41:04.384 Total : 9736.16 38.03 0.00 0.00 13127.06 576.85 3019898.88 00:41:04.384 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:04.645 rmmod nvme_tcp 00:41:04.645 rmmod nvme_fabrics 00:41:04.645 rmmod nvme_keyring 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2461031 ']' 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2461031 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2461031 ']' 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2461031 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2461031 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2461031' 00:41:04.645 killing process with pid 2461031 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2461031 00:41:04.645 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2461031 00:41:04.906 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:04.906 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:04.906 11:49:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:04.906 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:04.906 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:04.906 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.906 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:04.906 11:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:06.820 11:49:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:06.820 00:41:06.820 real 0m42.448s 00:41:06.820 user 1m54.930s 00:41:06.820 sys 0m11.089s 00:41:06.820 11:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:06.820 11:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:41:06.820 ************************************ 00:41:06.820 END TEST nvmf_host_multipath_status 00:41:06.820 ************************************ 00:41:06.820 11:49:35 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:06.820 11:49:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:06.820 11:49:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:06.820 11:49:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:06.820 ************************************ 00:41:06.820 START TEST nvmf_discovery_remove_ifc 00:41:06.820 ************************************ 00:41:06.820 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:41:07.082 * Looking for test storage... 
00:41:07.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:07.082 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:41:07.083 11:49:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:15.225 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:15.225 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:15.225 11:49:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:15.225 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:15.225 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:15.225 11:49:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:15.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:15.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:41:15.225 00:41:15.225 --- 10.0.0.2 ping statistics --- 00:41:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:15.225 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:15.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:15.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:41:15.225 00:41:15.225 --- 10.0.0.1 ping statistics --- 00:41:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:15.225 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2471826 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2471826 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2471826 ']' 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:15.225 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.225 [2024-06-10 11:49:43.110649] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
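(Editor's note, not part of the captured output.) The dual-port topology that nvmftestinit builds out of the two e810 ports above — cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 left in the root namespace as the initiator side at 10.0.0.1 — can be reproduced by hand with the same iproute2/iptables calls seen in the trace. A minimal sketch, using the interface names and addresses from this run:

    # target port into its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator side, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1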
00:41:15.225 [2024-06-10 11:49:43.110737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:15.225 EAL: No free 2048 kB hugepages reported on node 1 00:41:15.225 [2024-06-10 11:49:43.180417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.225 [2024-06-10 11:49:43.252790] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:15.225 [2024-06-10 11:49:43.252831] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:15.225 [2024-06-10 11:49:43.252840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:15.225 [2024-06-10 11:49:43.252847] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:15.225 [2024-06-10 11:49:43.252857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:15.226 [2024-06-10 11:49:43.252879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.226 [2024-06-10 11:49:43.398253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:15.226 [2024-06-10 11:49:43.406401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:41:15.226 null0 00:41:15.226 [2024-06-10 11:49:43.438415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2471939 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2471939 /tmp/host.sock 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2471939 ']' 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:15.226 
11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:41:15.226 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.226 [2024-06-10 11:49:43.510562] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:41:15.226 [2024-06-10 11:49:43.510607] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471939 ] 00:41:15.226 EAL: No free 2048 kB hugepages reported on node 1 00:41:15.226 [2024-06-10 11:49:43.568309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.226 [2024-06-10 11:49:43.633041] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:15.226 11:49:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:16.169 [2024-06-10 11:49:44.846845] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:41:16.169 [2024-06-10 11:49:44.846866] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:41:16.169 [2024-06-10 11:49:44.846879] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:41:16.169 [2024-06-10 11:49:44.935167] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:41:16.430 [2024-06-10 11:49:45.160077] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:16.430 [2024-06-10 11:49:45.160132] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:16.430 [2024-06-10 11:49:45.160153] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:16.430 [2024-06-10 11:49:45.160168] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:41:16.430 [2024-06-10 11:49:45.160188] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.430 [2024-06-10 11:49:45.206454] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a50ea0 was disconnected and freed. delete nvme_qpair. 
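(Editor's note, not part of the captured output.) The discovery attach logged above is driven entirely over the host app's private RPC socket (/tmp/host.sock, set when nvmf_tgt was started with --wait-for-rpc -L bdev_nvme). The same sequence can be issued by hand with scripts/rpc.py from the spdk checkout (the repo-relative equivalent of the full Jenkins path used in the trace). A minimal sketch using the socket, target address and timeout values from this run:

    RPC="./scripts/rpc.py -s /tmp/host.sock"
    # bdev_nvme options and framework start, exactly as issued in the trace above
    $RPC bdev_nvme_set_options -e 1
    $RPC framework_start_init
    # attach to the discovery service on 10.0.0.2:8009 and wait until the NVM subsystem it
    # advertises is attached as controller "nvme"
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach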
00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:16.430 11:49:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:17.833 11:49:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:18.778 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:18.778 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:18.778 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:18.778 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.778 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:18.779 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:18.779 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
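(Editor's note, not part of the captured output.) The get_bdev_list / wait_for_bdev traces repeated below are just a one-second polling loop over bdev_get_bdevs. A minimal sketch of that loop, matching the jq/sort/xargs pipeline visible in the trace (the real helper in discovery_remove_ifc.sh may add timeout handling not shown here):

    get_bdev_list() {
        # list the bdev names known to the host app, normalized to one sorted line
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll until the bdev list matches the expected value, e.g. "nvme0n1", "" or "nvme1n1"
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }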
00:41:18.779 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.779 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:18.779 11:49:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:19.723 11:49:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:20.665 11:49:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:22.051 [2024-06-10 11:49:50.600649] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:41:22.052 [2024-06-10 11:49:50.600697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:22.052 [2024-06-10 11:49:50.600709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:22.052 [2024-06-10 11:49:50.600719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:22.052 [2024-06-10 11:49:50.600726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:22.052 [2024-06-10 11:49:50.600734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:22.052 [2024-06-10 11:49:50.600741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:22.052 [2024-06-10 11:49:50.600749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:22.052 [2024-06-10 11:49:50.600756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:22.052 [2024-06-10 11:49:50.600764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:22.052 [2024-06-10 11:49:50.600771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:22.052 [2024-06-10 11:49:50.600778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18220 is same with the state(5) to be set 00:41:22.052 [2024-06-10 11:49:50.610673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18220 (9): Bad file descriptor 00:41:22.052 [2024-06-10 11:49:50.620713] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:22.052 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:22.052 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:22.052 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:22.052 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:22.052 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:22.052 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:22.052 11:49:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:22.994 [2024-06-10 11:49:51.676706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:41:22.994 [2024-06-10 11:49:51.676752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a18220 with addr=10.0.0.2, port=4420 00:41:22.994 [2024-06-10 11:49:51.676766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18220 is same with the state(5) to be set 00:41:22.994 [2024-06-10 11:49:51.676795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18220 (9): Bad file descriptor 00:41:22.994 [2024-06-10 11:49:51.677153] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:41:22.994 [2024-06-10 11:49:51.677171] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:22.994 [2024-06-10 11:49:51.677178] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:41:22.994 [2024-06-10 11:49:51.677187] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:22.994 [2024-06-10 11:49:51.677203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:41:22.994 [2024-06-10 11:49:51.677212] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:41:22.994 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:22.994 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:41:22.994 11:49:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:23.936 [2024-06-10 11:49:52.679591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:41:23.936 [2024-06-10 11:49:52.679623] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:41:23.936 [2024-06-10 11:49:52.679646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:23.936 [2024-06-10 11:49:52.679656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:23.936 [2024-06-10 11:49:52.679666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:23.936 [2024-06-10 11:49:52.679678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:23.936 [2024-06-10 11:49:52.679686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:23.936 [2024-06-10 11:49:52.679694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:23.936 [2024-06-10 11:49:52.679702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:23.936 [2024-06-10 11:49:52.679709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:23.936 [2024-06-10 11:49:52.679718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:41:23.936 [2024-06-10 11:49:52.679725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:23.936 [2024-06-10 11:49:52.679732] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:41:23.936 [2024-06-10 11:49:52.680139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a176b0 (9): Bad file descriptor 00:41:23.936 [2024-06-10 11:49:52.681150] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:41:23.936 [2024-06-10 11:49:52.681160] nvme_ctrlr.c:1203:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:23.936 11:49:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:41:25.318 11:49:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:41:25.891 [2024-06-10 11:49:54.731905] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:41:25.891 [2024-06-10 11:49:54.731926] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:41:25.891 [2024-06-10 11:49:54.731939] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:41:25.891 [2024-06-10 11:49:54.820222] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:41:26.152 [2024-06-10 11:49:54.921130] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:41:26.152 [2024-06-10 11:49:54.921167] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:41:26.152 [2024-06-10 11:49:54.921187] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:41:26.152 [2024-06-10 11:49:54.921201] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:41:26.152 [2024-06-10 11:49:54.921209] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:41:26.152 [2024-06-10 11:49:54.929531] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a5b860 was disconnected and freed. delete nvme_qpair. 
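Editor's note: between the one-second sleeps traced at discovery_remove_ifc.sh line 34, the test simply re-reads the bdev list until the rediscovered namespace shows up. A hedged reconstruction of that wait loop, implied by the @33/@34 trace lines (the real script body may differ in detail):

    # Reconstruction of the wait loop implied by the trace; get_bdev_list is the
    # query sketched earlier in this log.
    wait_for_bdev() {
        local bdev=$1
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme1n1    # returns once discovery re-attaches the subsystem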
00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:41:26.152 11:49:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2471939 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2471939 ']' 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 2471939 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2471939 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2471939' 00:41:26.152 killing process with pid 2471939 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2471939 00:41:26.152 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2471939 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:26.413 rmmod nvme_tcp 00:41:26.413 rmmod nvme_fabrics 00:41:26.413 rmmod nvme_keyring 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
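Editor's note: the teardown above (and the target-side teardown that follows) goes through the killprocess helper; its individual steps are visible in the autotest_common.sh trace lines. A hedged reconstruction, trimmed to the Linux path actually taken here, with error handling simplified:

    # Reconstructed from the autotest_common.sh trace lines (@949..@973); not the
    # verbatim helper, details such as FreeBSD handling are omitted.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1                    # '[' -z pid ']' check in the trace
        kill -0 "$pid" || return 1                     # process must still exist
        local name=
        [[ "$(uname)" == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
        [[ "$name" != sudo ]] || return 1              # refuse to kill a bare sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }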
00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2471826 ']' 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2471826 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2471826 ']' 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 2471826 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2471826 00:41:26.413 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:41:26.414 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:41:26.414 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2471826' 00:41:26.414 killing process with pid 2471826 00:41:26.414 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2471826 00:41:26.414 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2471826 00:41:26.674 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:26.674 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:26.674 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:26.674 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:26.674 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:26.675 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:26.675 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:26.675 11:49:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:28.590 11:49:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:28.590 00:41:28.590 real 0m21.741s 00:41:28.590 user 0m25.414s 00:41:28.590 sys 0m6.479s 00:41:28.590 11:49:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:28.590 11:49:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:41:28.590 ************************************ 00:41:28.590 END TEST nvmf_discovery_remove_ifc 00:41:28.590 ************************************ 00:41:28.590 11:49:57 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:28.590 11:49:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:28.590 11:49:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:28.590 11:49:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:28.852 ************************************ 00:41:28.852 START TEST nvmf_identify_kernel_target 00:41:28.852 ************************************ 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:41:28.852 * Looking for test storage... 00:41:28.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
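Editor's note: the host identity generated while sourcing nvmf/common.sh above (NVME_HOSTNQN from nvme gen-hostnqn and the matching NVME_HOSTID, carried in the NVME_HOST array) is what the kernel-target discovery later in this test passes to nvme discover. Pulled together for reference, with the values and target address copied verbatim from the trace further below:

    # Values copied from the trace; in the script they are supplied via "${NVME_HOST[@]}".
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
    NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -a 10.0.0.1 -t tcp -s 4420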
00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:41:28.852 11:49:57 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:41:37.029 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:37.029 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:41:37.029 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:37.029 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:37.029 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:37.029 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:37.030 11:50:04 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:37.030 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:37.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:37.030 
11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:37.030 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:37.030 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:37.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:37.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.835 ms 00:41:37.030 00:41:37.030 --- 10.0.0.2 ping statistics --- 00:41:37.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.030 rtt min/avg/max/mdev = 0.835/0.835/0.835/0.000 ms 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:37.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:37.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:41:37.030 00:41:37.030 --- 10.0.0.1 ping statistics --- 00:41:37.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:37.030 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:37.030 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:37.031 
11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:41:37.031 11:50:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:37.031 11:50:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:37.031 11:50:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:39.578 Waiting for block devices as requested 00:41:39.578 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:39.578 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:39.578 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:39.840 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:39.840 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:39.840 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:40.100 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:40.100 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:40.100 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:40.361 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:40.361 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:40.361 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:40.361 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:40.623 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:40.623 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:40.623 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:40.885 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:41:40.885 11:50:09 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:41:40.885 No valid GPT data, bailing 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:41:40.885 00:41:40.885 Discovery Log Number of Records 2, Generation counter 2 00:41:40.885 =====Discovery Log Entry 0====== 00:41:40.885 trtype: tcp 00:41:40.885 adrfam: ipv4 00:41:40.885 subtype: current discovery subsystem 00:41:40.885 treq: not specified, sq flow control disable supported 00:41:40.885 portid: 1 00:41:40.885 trsvcid: 4420 00:41:40.885 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:41:40.885 traddr: 10.0.0.1 00:41:40.885 eflags: none 00:41:40.885 sectype: none 00:41:40.885 =====Discovery Log Entry 1====== 
00:41:40.885 trtype: tcp 00:41:40.885 adrfam: ipv4 00:41:40.885 subtype: nvme subsystem 00:41:40.885 treq: not specified, sq flow control disable supported 00:41:40.885 portid: 1 00:41:40.885 trsvcid: 4420 00:41:40.885 subnqn: nqn.2016-06.io.spdk:testnqn 00:41:40.885 traddr: 10.0.0.1 00:41:40.885 eflags: none 00:41:40.885 sectype: none 00:41:40.885 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:41:40.885 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:41:40.885 EAL: No free 2048 kB hugepages reported on node 1 00:41:40.885 ===================================================== 00:41:40.885 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:41:40.885 ===================================================== 00:41:40.885 Controller Capabilities/Features 00:41:40.885 ================================ 00:41:40.885 Vendor ID: 0000 00:41:40.885 Subsystem Vendor ID: 0000 00:41:40.885 Serial Number: 182bfb6cb301bd061264 00:41:40.885 Model Number: Linux 00:41:40.885 Firmware Version: 6.7.0-68 00:41:40.885 Recommended Arb Burst: 0 00:41:40.885 IEEE OUI Identifier: 00 00 00 00:41:40.885 Multi-path I/O 00:41:40.885 May have multiple subsystem ports: No 00:41:40.885 May have multiple controllers: No 00:41:40.885 Associated with SR-IOV VF: No 00:41:40.885 Max Data Transfer Size: Unlimited 00:41:40.885 Max Number of Namespaces: 0 00:41:40.885 Max Number of I/O Queues: 1024 00:41:40.885 NVMe Specification Version (VS): 1.3 00:41:40.885 NVMe Specification Version (Identify): 1.3 00:41:40.885 Maximum Queue Entries: 1024 00:41:40.885 Contiguous Queues Required: No 00:41:40.885 Arbitration Mechanisms Supported 00:41:40.885 Weighted Round Robin: Not Supported 00:41:40.885 Vendor Specific: Not Supported 00:41:40.885 Reset Timeout: 7500 ms 00:41:40.885 Doorbell Stride: 4 bytes 00:41:40.885 NVM Subsystem Reset: Not Supported 00:41:40.885 Command Sets Supported 00:41:40.885 NVM Command Set: Supported 00:41:40.885 Boot Partition: Not Supported 00:41:40.885 Memory Page Size Minimum: 4096 bytes 00:41:40.885 Memory Page Size Maximum: 4096 bytes 00:41:40.885 Persistent Memory Region: Not Supported 00:41:40.885 Optional Asynchronous Events Supported 00:41:40.885 Namespace Attribute Notices: Not Supported 00:41:40.885 Firmware Activation Notices: Not Supported 00:41:40.885 ANA Change Notices: Not Supported 00:41:40.885 PLE Aggregate Log Change Notices: Not Supported 00:41:40.885 LBA Status Info Alert Notices: Not Supported 00:41:40.885 EGE Aggregate Log Change Notices: Not Supported 00:41:40.885 Normal NVM Subsystem Shutdown event: Not Supported 00:41:40.885 Zone Descriptor Change Notices: Not Supported 00:41:40.885 Discovery Log Change Notices: Supported 00:41:40.885 Controller Attributes 00:41:40.885 128-bit Host Identifier: Not Supported 00:41:40.885 Non-Operational Permissive Mode: Not Supported 00:41:40.885 NVM Sets: Not Supported 00:41:40.885 Read Recovery Levels: Not Supported 00:41:40.885 Endurance Groups: Not Supported 00:41:40.885 Predictable Latency Mode: Not Supported 00:41:40.885 Traffic Based Keep ALive: Not Supported 00:41:40.885 Namespace Granularity: Not Supported 00:41:40.885 SQ Associations: Not Supported 00:41:40.885 UUID List: Not Supported 00:41:40.885 Multi-Domain Subsystem: Not Supported 00:41:40.885 Fixed Capacity Management: Not Supported 00:41:40.885 Variable Capacity Management: Not 
Supported 00:41:40.885 Delete Endurance Group: Not Supported 00:41:40.885 Delete NVM Set: Not Supported 00:41:40.885 Extended LBA Formats Supported: Not Supported 00:41:40.885 Flexible Data Placement Supported: Not Supported 00:41:40.885 00:41:40.885 Controller Memory Buffer Support 00:41:40.885 ================================ 00:41:40.885 Supported: No 00:41:40.885 00:41:40.885 Persistent Memory Region Support 00:41:40.885 ================================ 00:41:40.885 Supported: No 00:41:40.885 00:41:40.885 Admin Command Set Attributes 00:41:40.885 ============================ 00:41:40.885 Security Send/Receive: Not Supported 00:41:40.885 Format NVM: Not Supported 00:41:40.885 Firmware Activate/Download: Not Supported 00:41:40.885 Namespace Management: Not Supported 00:41:40.885 Device Self-Test: Not Supported 00:41:40.885 Directives: Not Supported 00:41:40.885 NVMe-MI: Not Supported 00:41:40.885 Virtualization Management: Not Supported 00:41:40.885 Doorbell Buffer Config: Not Supported 00:41:40.885 Get LBA Status Capability: Not Supported 00:41:40.885 Command & Feature Lockdown Capability: Not Supported 00:41:40.885 Abort Command Limit: 1 00:41:40.885 Async Event Request Limit: 1 00:41:40.885 Number of Firmware Slots: N/A 00:41:40.885 Firmware Slot 1 Read-Only: N/A 00:41:40.885 Firmware Activation Without Reset: N/A 00:41:40.885 Multiple Update Detection Support: N/A 00:41:40.885 Firmware Update Granularity: No Information Provided 00:41:40.885 Per-Namespace SMART Log: No 00:41:40.885 Asymmetric Namespace Access Log Page: Not Supported 00:41:40.885 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:41:40.885 Command Effects Log Page: Not Supported 00:41:40.885 Get Log Page Extended Data: Supported 00:41:40.885 Telemetry Log Pages: Not Supported 00:41:40.885 Persistent Event Log Pages: Not Supported 00:41:40.885 Supported Log Pages Log Page: May Support 00:41:40.885 Commands Supported & Effects Log Page: Not Supported 00:41:40.885 Feature Identifiers & Effects Log Page:May Support 00:41:40.885 NVMe-MI Commands & Effects Log Page: May Support 00:41:40.885 Data Area 4 for Telemetry Log: Not Supported 00:41:40.885 Error Log Page Entries Supported: 1 00:41:40.885 Keep Alive: Not Supported 00:41:40.885 00:41:40.885 NVM Command Set Attributes 00:41:40.885 ========================== 00:41:40.885 Submission Queue Entry Size 00:41:40.885 Max: 1 00:41:40.885 Min: 1 00:41:40.885 Completion Queue Entry Size 00:41:40.886 Max: 1 00:41:40.886 Min: 1 00:41:40.886 Number of Namespaces: 0 00:41:40.886 Compare Command: Not Supported 00:41:40.886 Write Uncorrectable Command: Not Supported 00:41:40.886 Dataset Management Command: Not Supported 00:41:40.886 Write Zeroes Command: Not Supported 00:41:40.886 Set Features Save Field: Not Supported 00:41:40.886 Reservations: Not Supported 00:41:40.886 Timestamp: Not Supported 00:41:40.886 Copy: Not Supported 00:41:40.886 Volatile Write Cache: Not Present 00:41:40.886 Atomic Write Unit (Normal): 1 00:41:40.886 Atomic Write Unit (PFail): 1 00:41:40.886 Atomic Compare & Write Unit: 1 00:41:40.886 Fused Compare & Write: Not Supported 00:41:40.886 Scatter-Gather List 00:41:40.886 SGL Command Set: Supported 00:41:40.886 SGL Keyed: Not Supported 00:41:40.886 SGL Bit Bucket Descriptor: Not Supported 00:41:40.886 SGL Metadata Pointer: Not Supported 00:41:40.886 Oversized SGL: Not Supported 00:41:40.886 SGL Metadata Address: Not Supported 00:41:40.886 SGL Offset: Supported 00:41:40.886 Transport SGL Data Block: Not Supported 00:41:40.886 Replay Protected Memory Block: 
Not Supported 00:41:40.886 00:41:40.886 Firmware Slot Information 00:41:40.886 ========================= 00:41:40.886 Active slot: 0 00:41:40.886 00:41:40.886 00:41:40.886 Error Log 00:41:40.886 ========= 00:41:40.886 00:41:40.886 Active Namespaces 00:41:40.886 ================= 00:41:40.886 Discovery Log Page 00:41:40.886 ================== 00:41:40.886 Generation Counter: 2 00:41:40.886 Number of Records: 2 00:41:40.886 Record Format: 0 00:41:40.886 00:41:40.886 Discovery Log Entry 0 00:41:40.886 ---------------------- 00:41:40.886 Transport Type: 3 (TCP) 00:41:40.886 Address Family: 1 (IPv4) 00:41:40.886 Subsystem Type: 3 (Current Discovery Subsystem) 00:41:40.886 Entry Flags: 00:41:40.886 Duplicate Returned Information: 0 00:41:40.886 Explicit Persistent Connection Support for Discovery: 0 00:41:40.886 Transport Requirements: 00:41:40.886 Secure Channel: Not Specified 00:41:40.886 Port ID: 1 (0x0001) 00:41:40.886 Controller ID: 65535 (0xffff) 00:41:40.886 Admin Max SQ Size: 32 00:41:40.886 Transport Service Identifier: 4420 00:41:40.886 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:41:40.886 Transport Address: 10.0.0.1 00:41:40.886 Discovery Log Entry 1 00:41:40.886 ---------------------- 00:41:40.886 Transport Type: 3 (TCP) 00:41:40.886 Address Family: 1 (IPv4) 00:41:40.886 Subsystem Type: 2 (NVM Subsystem) 00:41:40.886 Entry Flags: 00:41:40.886 Duplicate Returned Information: 0 00:41:40.886 Explicit Persistent Connection Support for Discovery: 0 00:41:40.886 Transport Requirements: 00:41:40.886 Secure Channel: Not Specified 00:41:40.886 Port ID: 1 (0x0001) 00:41:40.886 Controller ID: 65535 (0xffff) 00:41:40.886 Admin Max SQ Size: 32 00:41:40.886 Transport Service Identifier: 4420 00:41:40.886 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:41:40.886 Transport Address: 10.0.0.1 00:41:40.886 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:41.148 EAL: No free 2048 kB hugepages reported on node 1 00:41:41.148 get_feature(0x01) failed 00:41:41.148 get_feature(0x02) failed 00:41:41.148 get_feature(0x04) failed 00:41:41.148 ===================================================== 00:41:41.148 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:41.148 ===================================================== 00:41:41.148 Controller Capabilities/Features 00:41:41.148 ================================ 00:41:41.148 Vendor ID: 0000 00:41:41.148 Subsystem Vendor ID: 0000 00:41:41.148 Serial Number: eddc3cd53365168af832 00:41:41.148 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:41:41.148 Firmware Version: 6.7.0-68 00:41:41.148 Recommended Arb Burst: 6 00:41:41.148 IEEE OUI Identifier: 00 00 00 00:41:41.148 Multi-path I/O 00:41:41.148 May have multiple subsystem ports: Yes 00:41:41.148 May have multiple controllers: Yes 00:41:41.148 Associated with SR-IOV VF: No 00:41:41.148 Max Data Transfer Size: Unlimited 00:41:41.148 Max Number of Namespaces: 1024 00:41:41.148 Max Number of I/O Queues: 128 00:41:41.148 NVMe Specification Version (VS): 1.3 00:41:41.148 NVMe Specification Version (Identify): 1.3 00:41:41.148 Maximum Queue Entries: 1024 00:41:41.148 Contiguous Queues Required: No 00:41:41.148 Arbitration Mechanisms Supported 00:41:41.148 Weighted Round Robin: Not Supported 00:41:41.148 Vendor Specific: Not Supported 
00:41:41.148 Reset Timeout: 7500 ms 00:41:41.148 Doorbell Stride: 4 bytes 00:41:41.148 NVM Subsystem Reset: Not Supported 00:41:41.148 Command Sets Supported 00:41:41.148 NVM Command Set: Supported 00:41:41.148 Boot Partition: Not Supported 00:41:41.148 Memory Page Size Minimum: 4096 bytes 00:41:41.148 Memory Page Size Maximum: 4096 bytes 00:41:41.148 Persistent Memory Region: Not Supported 00:41:41.148 Optional Asynchronous Events Supported 00:41:41.148 Namespace Attribute Notices: Supported 00:41:41.148 Firmware Activation Notices: Not Supported 00:41:41.148 ANA Change Notices: Supported 00:41:41.148 PLE Aggregate Log Change Notices: Not Supported 00:41:41.148 LBA Status Info Alert Notices: Not Supported 00:41:41.148 EGE Aggregate Log Change Notices: Not Supported 00:41:41.148 Normal NVM Subsystem Shutdown event: Not Supported 00:41:41.148 Zone Descriptor Change Notices: Not Supported 00:41:41.148 Discovery Log Change Notices: Not Supported 00:41:41.148 Controller Attributes 00:41:41.148 128-bit Host Identifier: Supported 00:41:41.148 Non-Operational Permissive Mode: Not Supported 00:41:41.148 NVM Sets: Not Supported 00:41:41.148 Read Recovery Levels: Not Supported 00:41:41.148 Endurance Groups: Not Supported 00:41:41.148 Predictable Latency Mode: Not Supported 00:41:41.148 Traffic Based Keep ALive: Supported 00:41:41.148 Namespace Granularity: Not Supported 00:41:41.148 SQ Associations: Not Supported 00:41:41.148 UUID List: Not Supported 00:41:41.148 Multi-Domain Subsystem: Not Supported 00:41:41.148 Fixed Capacity Management: Not Supported 00:41:41.148 Variable Capacity Management: Not Supported 00:41:41.148 Delete Endurance Group: Not Supported 00:41:41.148 Delete NVM Set: Not Supported 00:41:41.148 Extended LBA Formats Supported: Not Supported 00:41:41.148 Flexible Data Placement Supported: Not Supported 00:41:41.148 00:41:41.148 Controller Memory Buffer Support 00:41:41.148 ================================ 00:41:41.148 Supported: No 00:41:41.148 00:41:41.148 Persistent Memory Region Support 00:41:41.148 ================================ 00:41:41.148 Supported: No 00:41:41.148 00:41:41.148 Admin Command Set Attributes 00:41:41.148 ============================ 00:41:41.148 Security Send/Receive: Not Supported 00:41:41.148 Format NVM: Not Supported 00:41:41.148 Firmware Activate/Download: Not Supported 00:41:41.148 Namespace Management: Not Supported 00:41:41.148 Device Self-Test: Not Supported 00:41:41.148 Directives: Not Supported 00:41:41.148 NVMe-MI: Not Supported 00:41:41.148 Virtualization Management: Not Supported 00:41:41.148 Doorbell Buffer Config: Not Supported 00:41:41.149 Get LBA Status Capability: Not Supported 00:41:41.149 Command & Feature Lockdown Capability: Not Supported 00:41:41.149 Abort Command Limit: 4 00:41:41.149 Async Event Request Limit: 4 00:41:41.149 Number of Firmware Slots: N/A 00:41:41.149 Firmware Slot 1 Read-Only: N/A 00:41:41.149 Firmware Activation Without Reset: N/A 00:41:41.149 Multiple Update Detection Support: N/A 00:41:41.149 Firmware Update Granularity: No Information Provided 00:41:41.149 Per-Namespace SMART Log: Yes 00:41:41.149 Asymmetric Namespace Access Log Page: Supported 00:41:41.149 ANA Transition Time : 10 sec 00:41:41.149 00:41:41.149 Asymmetric Namespace Access Capabilities 00:41:41.149 ANA Optimized State : Supported 00:41:41.149 ANA Non-Optimized State : Supported 00:41:41.149 ANA Inaccessible State : Supported 00:41:41.149 ANA Persistent Loss State : Supported 00:41:41.149 ANA Change State : Supported 00:41:41.149 ANAGRPID is not 
changed : No 00:41:41.149 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:41:41.149 00:41:41.149 ANA Group Identifier Maximum : 128 00:41:41.149 Number of ANA Group Identifiers : 128 00:41:41.149 Max Number of Allowed Namespaces : 1024 00:41:41.149 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:41:41.149 Command Effects Log Page: Supported 00:41:41.149 Get Log Page Extended Data: Supported 00:41:41.149 Telemetry Log Pages: Not Supported 00:41:41.149 Persistent Event Log Pages: Not Supported 00:41:41.149 Supported Log Pages Log Page: May Support 00:41:41.149 Commands Supported & Effects Log Page: Not Supported 00:41:41.149 Feature Identifiers & Effects Log Page:May Support 00:41:41.149 NVMe-MI Commands & Effects Log Page: May Support 00:41:41.149 Data Area 4 for Telemetry Log: Not Supported 00:41:41.149 Error Log Page Entries Supported: 128 00:41:41.149 Keep Alive: Supported 00:41:41.149 Keep Alive Granularity: 1000 ms 00:41:41.149 00:41:41.149 NVM Command Set Attributes 00:41:41.149 ========================== 00:41:41.149 Submission Queue Entry Size 00:41:41.149 Max: 64 00:41:41.149 Min: 64 00:41:41.149 Completion Queue Entry Size 00:41:41.149 Max: 16 00:41:41.149 Min: 16 00:41:41.149 Number of Namespaces: 1024 00:41:41.149 Compare Command: Not Supported 00:41:41.149 Write Uncorrectable Command: Not Supported 00:41:41.149 Dataset Management Command: Supported 00:41:41.149 Write Zeroes Command: Supported 00:41:41.149 Set Features Save Field: Not Supported 00:41:41.149 Reservations: Not Supported 00:41:41.149 Timestamp: Not Supported 00:41:41.149 Copy: Not Supported 00:41:41.149 Volatile Write Cache: Present 00:41:41.149 Atomic Write Unit (Normal): 1 00:41:41.149 Atomic Write Unit (PFail): 1 00:41:41.149 Atomic Compare & Write Unit: 1 00:41:41.149 Fused Compare & Write: Not Supported 00:41:41.149 Scatter-Gather List 00:41:41.149 SGL Command Set: Supported 00:41:41.149 SGL Keyed: Not Supported 00:41:41.149 SGL Bit Bucket Descriptor: Not Supported 00:41:41.149 SGL Metadata Pointer: Not Supported 00:41:41.149 Oversized SGL: Not Supported 00:41:41.149 SGL Metadata Address: Not Supported 00:41:41.149 SGL Offset: Supported 00:41:41.149 Transport SGL Data Block: Not Supported 00:41:41.149 Replay Protected Memory Block: Not Supported 00:41:41.149 00:41:41.149 Firmware Slot Information 00:41:41.149 ========================= 00:41:41.149 Active slot: 0 00:41:41.149 00:41:41.149 Asymmetric Namespace Access 00:41:41.149 =========================== 00:41:41.149 Change Count : 0 00:41:41.149 Number of ANA Group Descriptors : 1 00:41:41.149 ANA Group Descriptor : 0 00:41:41.149 ANA Group ID : 1 00:41:41.149 Number of NSID Values : 1 00:41:41.149 Change Count : 0 00:41:41.149 ANA State : 1 00:41:41.149 Namespace Identifier : 1 00:41:41.149 00:41:41.149 Commands Supported and Effects 00:41:41.149 ============================== 00:41:41.149 Admin Commands 00:41:41.149 -------------- 00:41:41.149 Get Log Page (02h): Supported 00:41:41.149 Identify (06h): Supported 00:41:41.149 Abort (08h): Supported 00:41:41.149 Set Features (09h): Supported 00:41:41.149 Get Features (0Ah): Supported 00:41:41.149 Asynchronous Event Request (0Ch): Supported 00:41:41.149 Keep Alive (18h): Supported 00:41:41.149 I/O Commands 00:41:41.149 ------------ 00:41:41.149 Flush (00h): Supported 00:41:41.149 Write (01h): Supported LBA-Change 00:41:41.149 Read (02h): Supported 00:41:41.149 Write Zeroes (08h): Supported LBA-Change 00:41:41.149 Dataset Management (09h): Supported 00:41:41.149 00:41:41.149 Error Log 00:41:41.149 ========= 
00:41:41.149 Entry: 0 00:41:41.149 Error Count: 0x3 00:41:41.149 Submission Queue Id: 0x0 00:41:41.149 Command Id: 0x5 00:41:41.149 Phase Bit: 0 00:41:41.149 Status Code: 0x2 00:41:41.149 Status Code Type: 0x0 00:41:41.149 Do Not Retry: 1 00:41:41.149 Error Location: 0x28 00:41:41.149 LBA: 0x0 00:41:41.149 Namespace: 0x0 00:41:41.149 Vendor Log Page: 0x0 00:41:41.149 ----------- 00:41:41.149 Entry: 1 00:41:41.149 Error Count: 0x2 00:41:41.149 Submission Queue Id: 0x0 00:41:41.149 Command Id: 0x5 00:41:41.149 Phase Bit: 0 00:41:41.149 Status Code: 0x2 00:41:41.149 Status Code Type: 0x0 00:41:41.149 Do Not Retry: 1 00:41:41.149 Error Location: 0x28 00:41:41.149 LBA: 0x0 00:41:41.149 Namespace: 0x0 00:41:41.149 Vendor Log Page: 0x0 00:41:41.149 ----------- 00:41:41.149 Entry: 2 00:41:41.149 Error Count: 0x1 00:41:41.149 Submission Queue Id: 0x0 00:41:41.149 Command Id: 0x4 00:41:41.149 Phase Bit: 0 00:41:41.149 Status Code: 0x2 00:41:41.149 Status Code Type: 0x0 00:41:41.149 Do Not Retry: 1 00:41:41.149 Error Location: 0x28 00:41:41.149 LBA: 0x0 00:41:41.149 Namespace: 0x0 00:41:41.149 Vendor Log Page: 0x0 00:41:41.149 00:41:41.149 Number of Queues 00:41:41.149 ================ 00:41:41.149 Number of I/O Submission Queues: 128 00:41:41.149 Number of I/O Completion Queues: 128 00:41:41.149 00:41:41.149 ZNS Specific Controller Data 00:41:41.149 ============================ 00:41:41.149 Zone Append Size Limit: 0 00:41:41.149 00:41:41.149 00:41:41.149 Active Namespaces 00:41:41.149 ================= 00:41:41.149 get_feature(0x05) failed 00:41:41.149 Namespace ID:1 00:41:41.149 Command Set Identifier: NVM (00h) 00:41:41.149 Deallocate: Supported 00:41:41.149 Deallocated/Unwritten Error: Not Supported 00:41:41.149 Deallocated Read Value: Unknown 00:41:41.149 Deallocate in Write Zeroes: Not Supported 00:41:41.149 Deallocated Guard Field: 0xFFFF 00:41:41.149 Flush: Supported 00:41:41.149 Reservation: Not Supported 00:41:41.149 Namespace Sharing Capabilities: Multiple Controllers 00:41:41.149 Size (in LBAs): 3750748848 (1788GiB) 00:41:41.149 Capacity (in LBAs): 3750748848 (1788GiB) 00:41:41.149 Utilization (in LBAs): 3750748848 (1788GiB) 00:41:41.149 UUID: e6561c5c-3deb-4853-b8b0-98479a4f71ca 00:41:41.149 Thin Provisioning: Not Supported 00:41:41.149 Per-NS Atomic Units: Yes 00:41:41.149 Atomic Write Unit (Normal): 8 00:41:41.149 Atomic Write Unit (PFail): 8 00:41:41.149 Preferred Write Granularity: 8 00:41:41.149 Atomic Compare & Write Unit: 8 00:41:41.149 Atomic Boundary Size (Normal): 0 00:41:41.149 Atomic Boundary Size (PFail): 0 00:41:41.149 Atomic Boundary Offset: 0 00:41:41.149 NGUID/EUI64 Never Reused: No 00:41:41.149 ANA group ID: 1 00:41:41.149 Namespace Write Protected: No 00:41:41.149 Number of LBA Formats: 1 00:41:41.149 Current LBA Format: LBA Format #00 00:41:41.149 LBA Format #00: Data Size: 512 Metadata Size: 0 00:41:41.149 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:41.149 rmmod nvme_tcp 00:41:41.149 rmmod nvme_fabrics 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:41.149 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:41.150 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.150 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:41.150 11:50:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:43.063 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:43.063 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:41:43.063 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:43.063 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:41:43.063 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:43.326 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:43.326 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:43.326 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:43.326 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:41:43.326 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:41:43.326 11:50:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:46.634 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:00:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:41:46.634 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:41:46.634 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:41:46.634 00:41:46.634 real 0m17.982s 00:41:46.634 user 0m4.620s 00:41:46.634 sys 0m10.485s 00:41:46.634 11:50:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:46.634 11:50:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:41:46.634 ************************************ 00:41:46.634 END TEST nvmf_identify_kernel_target 00:41:46.634 ************************************ 00:41:46.896 11:50:15 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:46.896 11:50:15 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:46.896 11:50:15 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:46.896 11:50:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:46.896 ************************************ 00:41:46.896 START TEST nvmf_auth_host 00:41:46.896 ************************************ 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:41:46.896 * Looking for test storage... 00:41:46.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:46.896 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:41:46.897 11:50:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:41:55.089 11:50:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:55.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:55.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:55.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:55.089 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:55.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:55.090 
11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:55.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:55.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.729 ms 00:41:55.090 00:41:55.090 --- 10.0.0.2 ping statistics --- 00:41:55.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.090 rtt min/avg/max/mdev = 0.729/0.729/0.729/0.000 ms 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:55.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:55.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:41:55.090 00:41:55.090 --- 10.0.0.1 ping statistics --- 00:41:55.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:55.090 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2485759 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2485759 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2485759 ']' 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
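For orientation, the nvmf_tcp_init sequence traced above wires the two ice ports into a point-to-point test topology: the target-side port is moved into its own network namespace while the initiator side stays in the root namespace. Condensed into a minimal sketch (sudo and timestamps stripped; the cvl_0_0/cvl_0_1 names are specific to this rig):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator/host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                        # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> host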
00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:55.090 11:50:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=27fd130ba8eb597dab56cd3a910b6bdd 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BVF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 27fd130ba8eb597dab56cd3a910b6bdd 0 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 27fd130ba8eb597dab56cd3a910b6bdd 0 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=27fd130ba8eb597dab56cd3a910b6bdd 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BVF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BVF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.BVF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:41:55.090 
11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7a5c9db17ca00648c5fb038bcb2ef7ce73d004674313f232903c7f8cd517dc1a 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.enF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7a5c9db17ca00648c5fb038bcb2ef7ce73d004674313f232903c7f8cd517dc1a 3 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7a5c9db17ca00648c5fb038bcb2ef7ce73d004674313f232903c7f8cd517dc1a 3 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7a5c9db17ca00648c5fb038bcb2ef7ce73d004674313f232903c7f8cd517dc1a 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.enF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.enF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.enF 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=edbd3ba9525856a74c3af0074d4e6a9a31f8077eb4a86770 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.u2d 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key edbd3ba9525856a74c3af0074d4e6a9a31f8077eb4a86770 0 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 edbd3ba9525856a74c3af0074d4e6a9a31f8077eb4a86770 0 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=edbd3ba9525856a74c3af0074d4e6a9a31f8077eb4a86770 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.u2d 00:41:55.090 11:50:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.u2d 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.u2d 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e71f34694431bf5c729f2785f35a6f98a8314b43a45cde8 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qca 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e71f34694431bf5c729f2785f35a6f98a8314b43a45cde8 2 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e71f34694431bf5c729f2785f35a6f98a8314b43a45cde8 2 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e71f34694431bf5c729f2785f35a6f98a8314b43a45cde8 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qca 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qca 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Qca 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c50f48512e82831db729c3d605b49443 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Mdj 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c50f48512e82831db729c3d605b49443 1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c50f48512e82831db729c3d605b49443 1 
00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c50f48512e82831db729c3d605b49443 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Mdj 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Mdj 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Mdj 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2d07d0dc9fd788a5d13d1c831184de7e 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.44X 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2d07d0dc9fd788a5d13d1c831184de7e 1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2d07d0dc9fd788a5d13d1c831184de7e 1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2d07d0dc9fd788a5d13d1c831184de7e 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.44X 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.44X 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.44X 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=6ad8e886d6785dde02c84e09a113e3d6f78131ace4a7dce3 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ta7 00:41:55.090 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ad8e886d6785dde02c84e09a113e3d6f78131ace4a7dce3 2 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ad8e886d6785dde02c84e09a113e3d6f78131ace4a7dce3 2 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ad8e886d6785dde02c84e09a113e3d6f78131ace4a7dce3 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ta7 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ta7 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ta7 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ca2e8c19ea8333b334f697a5b111c5d7 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pTO 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ca2e8c19ea8333b334f697a5b111c5d7 0 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ca2e8c19ea8333b334f697a5b111c5d7 0 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ca2e8c19ea8333b334f697a5b111c5d7 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pTO 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pTO 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pTO 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0bf0435cd65f6966f41c1f4de3098000791e530f087d74888579098ae96d0318 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bel 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0bf0435cd65f6966f41c1f4de3098000791e530f087d74888579098ae96d0318 3 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0bf0435cd65f6966f41c1f4de3098000791e530f087d74888579098ae96d0318 3 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0bf0435cd65f6966f41c1f4de3098000791e530f087d74888579098ae96d0318 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bel 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bel 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bel 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2485759 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2485759 ']' 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:55.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
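The gen_dhchap_key calls traced above only draw random bytes with xxd and hex-encode them; format_dhchap_key then wraps that ASCII hex string into the DHHC-1 secret format (base64 of the secret with a 4-byte checksum appended by the python one-liner). A quick sanity check, reusing the key1 value that appears verbatim in the nvmet_auth_set_key trace further down (nothing below is invented except the freshly generated secret on the first line):

  # generation side, as in the trace: 24 random bytes -> 48 hex characters
  secret=$(xxd -p -c0 -l 24 /dev/urandom)
  # decoding side: field 3 of the DHHC-1:<digest>:<base64>: string is the payload;
  # its first 48 bytes are the ASCII hex secret, the trailing 4 bytes are the checksum
  key='DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==:'
  echo "$key" | cut -d: -f3 | base64 -d | head -c 48; echo
  # -> edbd3ba9525856a74c3af0074d4e6a9a31f8077eb4a86770, i.e. the keys[1] secret above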
00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:55.091 11:50:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BVF 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.enF ]] 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.enF 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.091 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.u2d 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Qca ]] 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qca 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Mdj 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.44X ]] 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.44X 00:41:55.352 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ta7 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pTO ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pTO 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bel 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:41:55.353 11:50:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:58.655 Waiting for block devices as requested 00:41:58.655 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:58.655 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:58.655 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:58.655 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:58.655 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:58.655 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:58.915 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:58.915 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:58.915 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:59.177 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:59.177 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:59.177 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:59.438 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:59.438 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:59.438 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:59.698 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:59.698 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:00.271 No valid GPT data, bailing 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:42:00.271 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:42:00.532 00:42:00.532 Discovery Log Number of Records 2, Generation counter 2 00:42:00.532 =====Discovery Log Entry 0====== 00:42:00.532 trtype: tcp 00:42:00.532 adrfam: ipv4 00:42:00.532 subtype: current discovery subsystem 00:42:00.532 treq: not specified, sq flow control disable supported 00:42:00.532 portid: 1 00:42:00.532 trsvcid: 4420 00:42:00.532 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:00.532 traddr: 10.0.0.1 00:42:00.532 eflags: none 00:42:00.532 sectype: none 00:42:00.532 =====Discovery Log Entry 1====== 00:42:00.532 trtype: tcp 00:42:00.532 adrfam: ipv4 00:42:00.532 subtype: nvme subsystem 00:42:00.532 treq: not specified, sq flow control disable supported 00:42:00.532 portid: 1 00:42:00.532 trsvcid: 4420 00:42:00.532 subnqn: nqn.2024-02.io.spdk:cnode0 00:42:00.532 traddr: 10.0.0.1 00:42:00.532 eflags: none 00:42:00.532 sectype: none 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 
]] 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:42:00.532 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.533 nvme0n1 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.533 
11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:00.533 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.794 
11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.794 nvme0n1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:00.794 11:50:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:00.794 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.056 nvme0n1 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
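[Editor's sketch] The trace above exercises one digest/dhgroup/keyid combination end to end: nvmet_auth_set_key programs the DHHC-1 secret on the kernel nvmet target, and connect_authenticate then restricts the SPDK host to the matching digest/dhgroup before attaching, checking, and detaching the controller. A minimal standalone sketch of that flow follows. It is not the verbatim test script: the configfs attribute names on the target side are assumptions (the trace only shows the echo/ln wrappers, not their redirect targets), scripts/rpc.py stands in for the rpc_cmd wrapper seen in the trace, the DHHC-1 strings are placeholders, and key1/ckey1 are assumed to be key names already registered with the SPDK application earlier in the test.

    #!/usr/bin/env bash
    # Sketch, not the test itself: DH-HMAC-CHAP over sha256 + ffdhe2048 for one host/subsystem pair.
    HOSTNQN=nqn.2024-02.io.spdk:host0
    SUBNQN=nqn.2024-02.io.spdk:cnode0
    KEY='DHHC-1:00:<base64 host secret>:'        # placeholder for the secret printed in the trace
    CKEY='DHHC-1:02:<base64 controller secret>:' # placeholder for the bidirectional (ctrlr) secret

    # Target (kernel nvmet) side: allow the host and give it the secrets.
    # Attribute names below are an assumption based on nvmet configfs auth support.
    mkdir -p /sys/kernel/config/nvmet/hosts/$HOSTNQN
    echo 'hmac(sha256)' > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_hash
    echo ffdhe2048      > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_dhgroup
    echo "$KEY"         > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_key
    echo "$CKEY"        > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_ctrl_key
    ln -s /sys/kernel/config/nvmet/hosts/$HOSTNQN \
          /sys/kernel/config/nvmet/subsystems/$SUBNQN/allowed_hosts/$HOSTNQN

    # Host (SPDK bdev_nvme) side: limit negotiation to the same digest/dhgroup, then
    # attach with the matching named keys, verify the controller exists, and detach.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers    # expect a controller named nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0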
00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.056 11:50:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.318 nvme0n1 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:01.318 11:50:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.318 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.580 nvme0n1 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.580 nvme0n1 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.580 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.842 nvme0n1 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:01.842 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.104 11:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.104 nvme0n1 00:42:02.104 
11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.104 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:02.104 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:02.104 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.104 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.104 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:42:02.365 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.366 nvme0n1 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.366 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
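[Editor's sketch] The connect/verify/detach cycle above then repeats for every combination; the host/auth.sh@100-@103 entries in the trace are the nested loop driving it, and the @93-@94 printf entries show the digest and dhgroup lists being iterated. Paraphrased (not the verbatim script), the iteration looks roughly like this, with keys[0..4]/ckeys[0..4] holding the DHHC-1 secrets printed in the trace:

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # reprogram the kernel target
                connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, check nvme0, detach
            done
        done
    done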
00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.628 nvme0n1 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.628 
11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.628 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:02.890 11:50:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.890 nvme0n1 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:02.890 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:42:03.152 11:50:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.152 11:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.413 nvme0n1 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:03.413 11:50:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.413 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.674 nvme0n1 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:03.674 11:50:32 
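On the initiator side, connect_authenticate repeats the same RPC sequence for every digest/dhgroup/keyid combination traced here. Collapsed into plain commands, one pass looks roughly like the sketch below (rpc_cmd is the test suite's wrapper; treating it as a direct scripts/rpc.py call is an assumption, and key1/ckey1 are key names set up earlier in the test, not shown in this excerpt):

  # restrict the initiator to the digest/DH group under test
  rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # attach with the matching key names; success means DH-HMAC-CHAP completed
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the controller came up, then tear it down for the next pass
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  rpc.py bdev_nvme_detach_controller nvme0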
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.674 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.934 nvme0n1 00:42:03.934 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.934 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:03.934 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:03.934 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.934 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:03.935 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:03.935 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:03.935 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:03.935 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:03.935 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.196 11:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.458 nvme0n1 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.458 11:50:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.458 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.719 nvme0n1 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:04.719 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:04.720 11:50:33 
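One detail worth noting in the keyid=4 passes: host/auth.sh@46 sets ckey= (empty) and @51 evaluates [[ -z '' ]], so the expansion at host/auth.sh@58 drops the flag and the keyid=4 attach earlier in this pass ran with --dhchap-key key4 only, i.e. the host authenticates but no bidirectional controller challenge is requested:

  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # keyid 0-3: ckeys[keyid] is set  -> ckey=(--dhchap-ctrlr-key ckeyN), bidirectional auth
  # keyid 4  : ckeys[4] is empty    -> ckey=(), controller is not asked to authenticate back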
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:04.720 11:50:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.292 nvme0n1 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:05.292 
11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:05.292 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:05.293 11:50:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.293 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.864 nvme0n1 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:05.864 11:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:05.865 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:05.865 11:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:06.126 nvme0n1 00:42:06.126 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.126 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:06.387 
11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.387 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:06.648 nvme0n1 00:42:06.648 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:06.910 11:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.171 nvme0n1 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:07.171 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:07.432 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.005 nvme0n1 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:08.005 11:50:36 
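The recurring nvmf/common.sh@741-755 block is get_main_ns_ip picking the address to dial. Reduced to its effect in this run (the indirect expansion and the transport variable name are inferred from the traced values, not shown verbatim in the log):

  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  ip=${ip_candidates[$TEST_TRANSPORT]}   # transport is tcp here -> NVMF_INITIATOR_IP
  echo "${!ip}"                          # indirect expansion; resolves to 10.0.0.1 in this run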
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.005 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:08.266 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:08.267 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:08.267 11:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:08.267 11:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:08.267 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.267 11:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.838 nvme0n1 00:42:08.838 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.838 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:08.838 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.839 11:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.782 nvme0n1 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:09.782 
11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
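[Editor's note] The trace above is one pass of the host/auth.sh connect_authenticate loop: for each keyid it programs the key on the kernel target (nvmet_auth_set_key), restricts the initiator to a single digest/dhgroup pair, attaches a controller with the matching DH-HMAC-CHAP secrets, checks that the controller appears, and detaches it. A condensed sketch of that host-side RPC cycle, reconstructed only from the commands visible in the xtrace (the variable names below are illustrative, not taken from the script):

    # Sketch of one connect_authenticate iteration (reconstructed from the trace, not the literal script)
    digest=sha256; dhgroup=ffdhe8192; keyid=2
    # restrict the initiator to the digest/dhgroup pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # connect to the target with the per-keyid secrets registered earlier
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # authentication succeeded if the controller is visible, then tear it down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0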
00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:09.782 11:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:10.355 nvme0n1 00:42:10.355 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.355 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:10.355 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:10.355 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.355 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:10.616 
11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:10.616 11:50:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:10.617 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.617 11:50:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.189 nvme0n1 00:42:11.189 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.189 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:11.189 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:11.189 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.189 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.189 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.450 nvme0n1 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
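[Editor's note] Earlier in the trace, the keyid=4 pass registered its key with an empty controller key (ckey=) and the subsequent attach dropped --dhchap-ctrlr-key entirely. That is driven by the script's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line, which uses bash's :+ expansion to produce an empty array when no controller key exists. A minimal standalone illustration of that idiom (names and key values here are hypothetical):

    # If ckeys[keyid] is set and non-empty, ckey_args holds the two extra arguments;
    # otherwise it expands to an empty array and the option is simply omitted.
    declare -a ckeys=([1]="DHHC-1:..." [4]="")
    keyid=4
    ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra args: ${ckey_args[@]:-<none>}"   # prints "<none>" for keyid=4, "--dhchap-ctrlr-key ckey1" for keyid=1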
00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:11.450 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.451 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.712 nvme0n1 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.712 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.973 nvme0n1 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:11.973 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:11.974 11:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.235 nvme0n1 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.235 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.496 nvme0n1 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
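[Editor's note] Each pass begins with nvmet_auth_set_key, whose xtrace shows only the echo commands (bash does not trace redirections): the digest as 'hmac(sha384)', the dhgroup name, the DHHC-1 host key, and, when present, the controller key. A hedged reconstruction of that helper's shape from the visible commands; the redirection targets are not shown in the trace and are deliberately left abstract here:

    # Reconstructed shape of nvmet_auth_set_key from the host/auth.sh xtrace; where each
    # echo is redirected (the target's auth configuration) is elided by xtrace, so it is omitted.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac(${digest})"            # e.g. hmac(sha384)
        echo "${dhgroup}"                 # e.g. ffdhe3072
        echo "${key}"                     # DHHC-1:... host key
        [[ -z $ckey ]] || echo "$ckey"    # optional bidirectional (controller) key
    }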
00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.496 nvme0n1 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.496 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
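[Editor's note] Before every attach the script evaluates get_main_ns_ip, and the repeated nvmf/common.sh@741-755 entries above are its body: it maps the transport to the name of the environment variable holding the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints that variable's value, 10.0.0.1 in this run. A sketch of the lookup as implied by the trace; the indirect expansion and the TEST_TRANSPORT variable name are inferences, since xtrace only shows the already-resolved values ("tcp", 10.0.0.1):

    # Reconstructed from the nvmf/common.sh trace: pick the address variable for the
    # transport in use and print its value. Failure handling is assumed, not shown in the trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # expands to "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # name of the variable, not the value
        [[ -z ${!ip} ]] && return 1                             # indirect expansion (assumed) -> 10.0.0.1
        echo "${!ip}"
    }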
00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.757 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:12.758 nvme0n1 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:12.758 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:13.018 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.019 nvme0n1 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.019 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.279 11:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:13.279 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.280 nvme0n1 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.280 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.540 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.540 nvme0n1 00:42:13.541 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.541 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:13.541 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:13.541 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.541 11:50:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.541 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:13.801 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:13.802 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.063 nvme0n1 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.063 11:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.325 nvme0n1 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.325 11:50:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.325 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.585 nvme0n1 00:42:14.586 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.586 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:14.586 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.586 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:14.586 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.586 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:42:14.847 11:50:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:14.847 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.108 nvme0n1 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:42:15.108 11:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.369 nvme0n1 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.369 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.941 nvme0n1 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.941 11:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.513 nvme0n1 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:16.513 11:50:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:16.513 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:16.514 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.124 nvme0n1 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.124 11:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.411 nvme0n1 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.411 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:17.671 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
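The nvmf/common.sh trace entries around this point (@741 through @755) repeatedly execute the get_main_ns_ip helper that resolves the initiator address used for every attach. A minimal sketch of that helper, reconstructed only from the traced statements, is below; the name of the transport variable and the fallback branch are assumptions, since the trace only shows the tcp path resolving to 10.0.0.1.

    # Sketch of get_main_ns_ip as implied by nvmf/common.sh@741-@755 in the
    # trace; the transport variable name and the non-tcp fallback are assumed.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Pick the *name* of the variable holding the address for the active
        # transport (the trace shows both -z checks passing for tcp).
        if [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            ip=NVMF_FIRST_TARGET_IP   # assumed fallback; not exercised in this log
        else
            ip=${ip_candidates[$TEST_TRANSPORT]}
        fi

        # Dereference it and print the address (10.0.0.1 in this run).
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }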
00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.672 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.932 nvme0n1 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:17.932 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.193 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:18.193 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:18.193 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:18.193 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:42:18.193 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:18.193 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:18.193 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
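At this point the trace begins the ffdhe8192 pass of the same sequence that produced the ffdhe3072/ffdhe4096/ffdhe6144 iterations above (host/auth.sh@101-@104 driving @42-@51 and @55-@65). The loop below is a reconstruction of that flow for orientation: the RPC names, flags, NQNs, and the ckey expansion are copied from the trace, while the surrounding control flow is inferred and the real auth.sh may differ (the digest is fixed at sha384 throughout this stretch of the log, and the configfs targets of the nvmet-side echo statements are not visible here, so they are omitted).

    # Inferred shape of the DH-HMAC-CHAP verification loop seen in this log.
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 here
        for keyid in "${!keys[@]}"; do         # keyids 0..4 in this run
            # Target side: program the kernel nvmet subsystem with the same
            # digest, DH group and DHHC-1 key (echo destinations not shown in
            # the trace).
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"

            # Host side: configure the initiator, then attempt an
            # authenticated connect with key<keyid>; the controller key is
            # only passed when ckeys[keyid] is non-empty.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"

            # The connect only counts if the controller actually shows up,
            # then it is torn down before the next key is tried.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done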
00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:18.194 11:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.765 nvme0n1 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:18.765 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:18.766 11:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.709 nvme0n1 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:19.709 11:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:20.281 nvme0n1 00:42:20.281 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:20.281 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:20.281 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:20.281 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:20.281 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:20.281 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:20.541 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:20.542 11:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.114 nvme0n1 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.114 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:21.375 11:50:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.375 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.946 nvme0n1 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:21.946 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.207 11:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.207 nvme0n1 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.207 11:50:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.207 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.468 nvme0n1 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.468 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.469 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.729 nvme0n1 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.729 11:50:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:22.729 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:22.730 11:50:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.730 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.991 nvme0n1 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.991 nvme0n1 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:22.991 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:23.252 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.253 11:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.253 nvme0n1 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.253 
11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.253 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:23.513 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.514 11:50:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.514 nvme0n1 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.514 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
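The nvmet_auth_set_key calls in this trace only show the values being echoed ('hmac(sha512)', the DH group, and the DHHC-1 key strings); the destinations of those echoes are not visible in the xtrace output. A minimal sketch of what such a helper plausibly does, assuming the standard Linux kernel nvmet per-host configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and using the host NQN seen in this run — the helper name and paths below are assumptions, not taken from the trace:

    # Hypothetical target-side helper: push one DH-HMAC-CHAP key pair into the
    # kernel nvmet target for a given host NQN. The configfs attribute names and
    # path are assumptions based on the usual Linux nvmet layout.
    nvmet_auth_set_key_sketch() {
        local digest=$1 dhgroup=$2 key=$3 ckey=$4
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe3072
        echo "${key}"          > "${host_dir}/dhchap_key"       # host key (DHHC-1:..)
        # Controller key only when bidirectional authentication is being tested.
        [[ -n ${ckey} ]] && echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
    }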
00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:23.774 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.775 nvme0n1 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:23.775 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.036 11:50:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
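Each pass of the keyid loop above repeats the same initiator-side RPC sequence. Condensed from the trace into a standalone sketch: rpc_cmd appears to be the autotest wrapper around SPDK's scripts/rpc.py, so the same calls are shown here through rpc.py directly; the address, port, NQNs, and option names are exactly those used in this run, while key2/ckey2 refer to key material registered earlier in the script and the rpc.py path is an assumption about the working directory:

    # One connect/verify/disconnect cycle, as exercised above for
    # digest=sha512, dhgroup=ffdhe3072, keyid=2.
    rpc=./scripts/rpc.py

    # Restrict the initiator to a single digest / DH-group combination.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Attach with DH-HMAC-CHAP, using key 2 and (when present) controller key 2.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Authentication succeeded if the controller shows up by name...
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # ...then tear it down before the next key/dhgroup/digest combination.
    $rpc bdev_nvme_detach_controller nvme0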
00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:24.036 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.037 nvme0n1 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.037 11:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.297 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:24.297 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:24.297 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.297 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.297 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.297 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:24.297 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:24.298 
11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.298 nvme0n1 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.298 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.559 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:24.559 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:24.559 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.559 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.559 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.559 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.560 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.820 nvme0n1 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:24.821 11:50:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:24.821 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.081 nvme0n1 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
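The DHHC-1:... strings echoed throughout are the NVMe DH-HMAC-CHAP secret representation. As an assumption about that format (it is not spelled out in this log): the second field identifies the hash used to transform the secret (00 for an untransformed secret), and the base64 payload carries the secret bytes followed by a CRC32 of them. A quick way to pull one apart:

# Split a DHHC-1 secret into its fields and check the payload length
# (the format details here are assumptions, not taken from this log).
key='DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI:'
IFS=: read -r prefix hash_id payload _ <<< "$key"
echo "prefix=$prefix hash_id=$hash_id"
echo -n "$payload" | base64 -d | wc -c   # secret bytes + 4-byte CRC, e.g. 36 for a 32-byte secret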
00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:25.081 11:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.081 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.341 nvme0n1 00:42:25.341 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.341 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:42:25.341 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:25.341 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.341 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.341 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:25.602 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.603 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.864 nvme0n1 00:42:25.864 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.864 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:25.864 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:25.864 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:25.865 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.127 nvme0n1 00:42:26.127 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.127 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:26.127 11:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:26.127 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.127 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.127 11:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
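The repeated nvmf/common.sh@741-755 lines are the get_main_ns_ip helper resolving which address the host should dial for the transport under test; for tcp it lands on NVMF_INITIATOR_IP, i.e. 10.0.0.1 here. A simplified sketch of that selection (the transport variable name is an assumption; the real helper lives in nvmf/common.sh):

# Pick the initiator-facing IP for the current transport (sketch of the traced logic).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${!ip_candidates[$TEST_TRANSPORT]}   # indirect expansion, e.g. 10.0.0.1 for tcp
    [[ -n $ip ]] && echo "$ip"
}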
00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.127 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.699 nvme0n1 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
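The dhgroup/keyid sweep producing all of this output is two nested loops (host/auth.sh@101-103): for every DH group, every key index is installed on the target and then authenticated against. A sketch of that outer structure, listing only the groups visible in this part of the log and assuming the keys[]/ckeys[] arrays defined earlier in host/auth.sh:

# Outer sweep driving the rounds above (sketch; the full script also covers other
# digests and dhgroups in earlier sections of this log).
digest=sha512
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done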
00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.699 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.271 nvme0n1 00:42:27.272 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.272 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:27.272 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.272 11:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:27.272 11:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.272 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.844 nvme0n1 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.844 11:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.105 nvme0n1 00:42:28.105 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.105 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:28.105 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:28.105 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.105 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.105 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.366 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.626 nvme0n1 00:42:28.626 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.626 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:28.626 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:28.626 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.626 11:50:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.626 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjdmZDEzMGJhOGViNTk3ZGFiNTZjZDNhOTEwYjZiZGTQsDrb: 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: ]] 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2E1YzlkYjE3Y2EwMDY0OGM1ZmIwMzhiY2IyZWY3Y2U3M2QwMDQ2NzQzMTNmMjMyOTAzYzdmOGNkNTE3ZGMxYSrx/C4=: 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:28.887 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:28.888 11:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:29.459 nvme0n1 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.459 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:29.720 11:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.291 nvme0n1 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:30.291 11:50:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUwZjQ4NTEyZTgyODMxZGI3MjljM2Q2MDViNDk0NDPeKXbI: 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: ]] 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwN2QwZGM5ZmQ3ODhhNWQxM2QxYzgzMTE4NGRlN2Xx863s: 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:30.291 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:30.552 11:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.121 nvme0n1 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFkOGU4ODZkNjc4NWRkZTAyYzg0ZTA5YTExM2UzZDZmNzgxMzFhY2U0YTdkY2UzV4HJ8A==: 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2EyZThjMTllYTgzMzNiMzM0ZjY5N2E1YjExMWM1ZDf2p67w: 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:42:31.121 11:51:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:31.121 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:31.381 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:42:31.381 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:31.381 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.953 nvme0n1 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.953 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGJmMDQzNWNkNjVmNjk2NmY0MWMxZjRkZTMwOTgwMDA3OTFlNTMwZjA4N2Q3NDg4ODU3OTA5OGFlOTZkMDMxOIukNrE=: 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:42:31.954 11:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.895 nvme0n1 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWRiZDNiYTk1MjU4NTZhNzRjM2FmMDA3NGQ0ZTZhOWEzMWY4MDc3ZWI0YTg2Nzcwo2fMEA==: 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU3MWYzNDY5NDQzMWJmNWM3MjlmMjc4NWYzNWE2Zjk4YTgzMTRiNDNhNDVjZGU4GScuqA==: 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:32.895 
11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.895 request: 00:42:32.895 { 00:42:32.895 "name": "nvme0", 00:42:32.895 "trtype": "tcp", 00:42:32.895 "traddr": "10.0.0.1", 00:42:32.895 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:42:32.895 "adrfam": "ipv4", 00:42:32.895 "trsvcid": "4420", 00:42:32.895 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:42:32.895 "method": "bdev_nvme_attach_controller", 00:42:32.895 "req_id": 1 00:42:32.895 } 00:42:32.895 Got JSON-RPC error response 00:42:32.895 response: 00:42:32.895 { 00:42:32.895 "code": -5, 00:42:32.895 "message": "Input/output error" 00:42:32.895 } 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:42:32.895 
11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:32.895 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:32.896 request: 00:42:32.896 { 00:42:32.896 "name": "nvme0", 00:42:32.896 "trtype": "tcp", 00:42:32.896 "traddr": "10.0.0.1", 00:42:32.896 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:42:32.896 "adrfam": "ipv4", 00:42:32.896 "trsvcid": "4420", 00:42:32.896 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:42:32.896 "dhchap_key": "key2", 00:42:32.896 "method": "bdev_nvme_attach_controller", 00:42:32.896 "req_id": 1 00:42:32.896 } 00:42:32.896 Got JSON-RPC error response 00:42:32.896 response: 00:42:32.896 { 00:42:32.896 "code": -5, 00:42:32.896 "message": "Input/output error" 00:42:32.896 } 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:32.896 
11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:42:32.896 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:42:33.157 request: 00:42:33.157 { 00:42:33.157 "name": "nvme0", 00:42:33.157 "trtype": "tcp", 00:42:33.157 "traddr": "10.0.0.1", 00:42:33.157 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:42:33.157 "adrfam": "ipv4", 00:42:33.157 "trsvcid": "4420", 00:42:33.157 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:42:33.157 "dhchap_key": "key1", 00:42:33.157 "dhchap_ctrlr_key": "ckey2", 00:42:33.157 "method": "bdev_nvme_attach_controller", 00:42:33.157 "req_id": 1 
00:42:33.157 } 00:42:33.157 Got JSON-RPC error response 00:42:33.157 response: 00:42:33.157 { 00:42:33.157 "code": -5, 00:42:33.157 "message": "Input/output error" 00:42:33.157 } 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:33.157 11:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:33.157 rmmod nvme_tcp 00:42:33.157 rmmod nvme_fabrics 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2485759 ']' 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2485759 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 2485759 ']' 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 2485759 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2485759 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2485759' 00:42:33.157 killing process with pid 2485759 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 2485759 00:42:33.157 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 2485759 00:42:33.416 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:33.416 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:33.416 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:33.416 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:33.416 11:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:33.416 11:51:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.416 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:33.416 11:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:35.331 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:42:35.592 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:42:35.592 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:42:35.592 11:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:38.905 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:38.905 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:38.905 11:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.BVF /tmp/spdk.key-null.u2d /tmp/spdk.key-sha256.Mdj /tmp/spdk.key-sha384.ta7 /tmp/spdk.key-sha512.bel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:42:38.905 11:51:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:42.210 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:42:42.210 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:42:42.210 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver
00:42:42.210 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:42:42.210 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:42:42.210 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:42:42.472 
00:42:42.472 real 0m55.580s
00:42:42.472 user 0m49.580s
00:42:42.472 sys 0m14.432s
00:42:42.472 11:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable
00:42:42.472 11:51:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:42:42.472 ************************************
00:42:42.472 END TEST nvmf_auth_host
00:42:42.472 ************************************
00:42:42.472 11:51:11 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]]
00:42:42.472 11:51:11 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:42:42.472 11:51:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:42:42.472 11:51:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:42:42.472 11:51:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:42:42.472 ************************************
00:42:42.472 START TEST nvmf_digest
00:42:42.472 ************************************
00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:42:42.472 * Looking for test storage...
00:42:42.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:42.472 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.803 11:51:11 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:42.804 11:51:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:42:42.804 11:51:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:49.393 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:49.394 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:49.394 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:49.394 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:49.394 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:49.394 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:49.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:49.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:42:49.656 00:42:49.656 --- 10.0.0.2 ping statistics --- 00:42:49.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:49.656 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:49.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:49.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:42:49.656 00:42:49.656 --- 10.0.0.1 ping statistics --- 00:42:49.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:49.656 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:42:49.656 11:51:18 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:42:49.657 ************************************ 00:42:49.657 START TEST nvmf_digest_clean 00:42:49.657 ************************************ 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2502397 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2502397 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2502397 ']' 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:49.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:49.657 11:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:42:49.657 [2024-06-10 11:51:18.624659] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:42:49.657 [2024-06-10 11:51:18.624717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:49.918 EAL: No free 2048 kB hugepages reported on node 1 00:42:49.918 [2024-06-10 11:51:18.684815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:49.918 [2024-06-10 11:51:18.751044] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:49.918 [2024-06-10 11:51:18.751077] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:49.918 [2024-06-10 11:51:18.751085] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:49.918 [2024-06-10 11:51:18.751092] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:49.918 [2024-06-10 11:51:18.751098] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:49.918 [2024-06-10 11:51:18.751114] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:50.490 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:50.490 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:42:50.490 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:50.490 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:42:50.490 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:50.751 null0 00:42:50.751 [2024-06-10 11:51:19.557701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:50.751 [2024-06-10 11:51:19.581905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:50.751 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2502603 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2502603 /var/tmp/bperf.sock 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2502603 ']' 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:50.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:50.752 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:42:50.752 [2024-06-10 11:51:19.634768] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:42:50.752 [2024-06-10 11:51:19.634816] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502603 ] 00:42:50.752 EAL: No free 2048 kB hugepages reported on node 1 00:42:50.752 [2024-06-10 11:51:19.692811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:51.011 [2024-06-10 11:51:19.756493] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:42:51.011 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:51.011 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:42:51.011 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:51.011 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:51.011 11:51:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:51.271 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:51.271 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:51.532 nvme0n1 00:42:51.532 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:51.532 11:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:51.532 Running I/O for 2 seconds... 
00:42:54.079 00:42:54.079 Latency(us) 00:42:54.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:54.079 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:54.079 nvme0n1 : 2.00 20714.40 80.92 0.00 0.00 6170.84 3249.49 21080.75 00:42:54.079 =================================================================================================================== 00:42:54.079 Total : 20714.40 80.92 0.00 0.00 6170.84 3249.49 21080.75 00:42:54.079 0 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:54.079 | select(.opcode=="crc32c") 00:42:54.079 | "\(.module_name) \(.executed)"' 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2502603 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2502603 ']' 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2502603 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2502603 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2502603' 00:42:54.079 killing process with pid 2502603 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2502603 00:42:54.079 Received shutdown signal, test time was about 2.000000 seconds 00:42:54.079 00:42:54.079 Latency(us) 00:42:54.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:54.079 =================================================================================================================== 00:42:54.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2502603 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:42:54.079 11:51:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2503277 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2503277 /var/tmp/bperf.sock 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2503277 ']' 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:54.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:54.079 11:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:54.079 [2024-06-10 11:51:22.906350] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:42:54.079 [2024-06-10 11:51:22.906406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503277 ] 00:42:54.079 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:54.079 Zero copy mechanism will not be used. 
00:42:54.079 EAL: No free 2048 kB hugepages reported on node 1 00:42:54.079 [2024-06-10 11:51:22.965324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:54.079 [2024-06-10 11:51:23.027044] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:42:54.340 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:54.340 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:42:54.340 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:54.340 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:54.340 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:54.340 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:54.340 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:54.913 nvme0n1 00:42:54.913 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:54.913 11:51:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:54.913 I/O size of 131072 is greater than zero copy threshold (65536). 00:42:54.913 Zero copy mechanism will not be used. 00:42:54.913 Running I/O for 2 seconds... 
00:42:56.829 00:42:56.829 Latency(us) 00:42:56.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:56.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:42:56.829 nvme0n1 : 2.00 2887.13 360.89 0.00 0.00 5538.81 3003.73 9065.81 00:42:56.829 =================================================================================================================== 00:42:56.829 Total : 2887.13 360.89 0.00 0.00 5538.81 3003.73 9065.81 00:42:56.829 0 00:42:56.829 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:42:56.829 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:42:56.829 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:42:56.829 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:42:56.829 | select(.opcode=="crc32c") 00:42:56.829 | "\(.module_name) \(.executed)"' 00:42:56.829 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2503277 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2503277 ']' 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2503277 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:57.091 11:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2503277 00:42:57.091 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:42:57.091 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:42:57.091 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2503277' 00:42:57.091 killing process with pid 2503277 00:42:57.091 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2503277 00:42:57.091 Received shutdown signal, test time was about 2.000000 seconds 00:42:57.091 00:42:57.091 Latency(us) 00:42:57.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:57.091 =================================================================================================================== 00:42:57.091 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:57.091 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2503277 00:42:57.352 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:42:57.352 11:51:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:42:57.352 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:42:57.352 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:42:57.352 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2503961 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2503961 /var/tmp/bperf.sock 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2503961 ']' 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:57.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:42:57.353 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:42:57.353 [2024-06-10 11:51:26.214864] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:42:57.353 [2024-06-10 11:51:26.214921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503961 ] 00:42:57.353 EAL: No free 2048 kB hugepages reported on node 1 00:42:57.353 [2024-06-10 11:51:26.272040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:57.614 [2024-06-10 11:51:26.334960] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:42:57.614 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:57.614 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:42:57.614 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:42:57.614 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:42:57.614 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:57.875 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:57.875 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:42:58.135 nvme0n1 00:42:58.135 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:42:58.135 11:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:58.135 Running I/O for 2 seconds... 
00:43:00.683 00:43:00.683 Latency(us) 00:43:00.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:00.683 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:00.683 nvme0n1 : 2.01 22007.73 85.97 0.00 0.00 5806.95 3058.35 15182.51 00:43:00.683 =================================================================================================================== 00:43:00.683 Total : 22007.73 85.97 0.00 0.00 5806.95 3058.35 15182.51 00:43:00.683 0 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:00.683 | select(.opcode=="crc32c") 00:43:00.683 | "\(.module_name) \(.executed)"' 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2503961 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2503961 ']' 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2503961 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2503961 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2503961' 00:43:00.683 killing process with pid 2503961 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2503961 00:43:00.683 Received shutdown signal, test time was about 2.000000 seconds 00:43:00.683 00:43:00.683 Latency(us) 00:43:00.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:00.683 =================================================================================================================== 00:43:00.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2503961 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:43:00.683 11:51:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2504596 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2504596 /var/tmp/bperf.sock 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2504596 ']' 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:00.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:00.683 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:43:00.683 [2024-06-10 11:51:29.501332] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:00.683 [2024-06-10 11:51:29.501389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504596 ] 00:43:00.683 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:00.683 Zero copy mechanism will not be used. 
00:43:00.683 EAL: No free 2048 kB hugepages reported on node 1 00:43:00.683 [2024-06-10 11:51:29.559053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:00.683 [2024-06-10 11:51:29.622285] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:00.944 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:00.944 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:43:00.944 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:43:00.944 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:43:00.944 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:01.205 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:01.205 11:51:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:01.464 nvme0n1 00:43:01.464 11:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:43:01.464 11:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:01.464 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:01.464 Zero copy mechanism will not be used. 00:43:01.464 Running I/O for 2 seconds... 
00:43:03.379 00:43:03.379 Latency(us) 00:43:03.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.379 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:43:03.379 nvme0n1 : 2.00 4615.93 576.99 0.00 0.00 3460.62 1611.09 13052.59 00:43:03.379 =================================================================================================================== 00:43:03.379 Total : 4615.93 576.99 0.00 0.00 3460.62 1611.09 13052.59 00:43:03.379 0 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:43:03.638 | select(.opcode=="crc32c") 00:43:03.638 | "\(.module_name) \(.executed)"' 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2504596 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2504596 ']' 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2504596 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:03.638 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2504596 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2504596' 00:43:03.898 killing process with pid 2504596 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2504596 00:43:03.898 Received shutdown signal, test time was about 2.000000 seconds 00:43:03.898 00:43:03.898 Latency(us) 00:43:03.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.898 =================================================================================================================== 00:43:03.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2504596 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2502397 00:43:03.898 11:51:32 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2502397 ']' 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2502397 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2502397 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2502397' 00:43:03.898 killing process with pid 2502397 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2502397 00:43:03.898 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2502397 00:43:04.159 00:43:04.159 real 0m14.364s 00:43:04.159 user 0m28.157s 00:43:04.159 sys 0m3.287s 00:43:04.159 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:04.159 11:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:43:04.159 ************************************ 00:43:04.159 END TEST nvmf_digest_clean 00:43:04.159 ************************************ 00:43:04.159 11:51:32 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:43:04.159 11:51:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:04.159 11:51:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:04.159 11:51:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:43:04.159 ************************************ 00:43:04.159 START TEST nvmf_digest_error 00:43:04.159 ************************************ 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2505334 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2505334 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2505334 ']' 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:43:04.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:04.159 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:43:04.159 [2024-06-10 11:51:33.083390] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:04.159 [2024-06-10 11:51:33.083441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:04.159 EAL: No free 2048 kB hugepages reported on node 1 00:43:04.419 [2024-06-10 11:51:33.149311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.420 [2024-06-10 11:51:33.218149] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:04.420 [2024-06-10 11:51:33.218185] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:04.420 [2024-06-10 11:51:33.218192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:04.420 [2024-06-10 11:51:33.218198] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:04.420 [2024-06-10 11:51:33.218207] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:04.420 [2024-06-10 11:51:33.218232] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:04.990 [2024-06-10 11:51:33.924235] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:43:04.990 11:51:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:04.990 11:51:33 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:05.253 null0 00:43:05.253 [2024-06-10 11:51:34.000959] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:05.253 [2024-06-10 11:51:34.025155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2505372 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2505372 /var/tmp/bperf.sock 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2505372 ']' 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:05.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:05.253 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:43:05.253 [2024-06-10 11:51:34.077187] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:43:05.253 [2024-06-10 11:51:34.077235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505372 ] 00:43:05.253 EAL: No free 2048 kB hugepages reported on node 1 00:43:05.253 [2024-06-10 11:51:34.135588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.253 [2024-06-10 11:51:34.199880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.515 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:05.515 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:43:05.515 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:05.515 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:05.515 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:43:05.515 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:05.515 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:05.776 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:05.776 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:05.776 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:06.037 nvme0n1 00:43:06.037 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:43:06.037 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:06.037 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:06.037 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:06.037 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:43:06.037 11:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:06.297 Running I/O for 2 seconds... 
00:43:06.297 [2024-06-10 11:51:35.060854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.297 [2024-06-10 11:51:35.060889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.297 [2024-06-10 11:51:35.060901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.297 [2024-06-10 11:51:35.073873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.297 [2024-06-10 11:51:35.073898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.297 [2024-06-10 11:51:35.073907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.297 [2024-06-10 11:51:35.086859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.297 [2024-06-10 11:51:35.086882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.297 [2024-06-10 11:51:35.086891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.297 [2024-06-10 11:51:35.098252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.297 [2024-06-10 11:51:35.098274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.098287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.110779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.110801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.110810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.124637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.124659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.124668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.136168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.136189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.136198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.149268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.149289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.149298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.161791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.161812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.161821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.174770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.174790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.174799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.185580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.185602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.185611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.199836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.199858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.199867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.213937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.213962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.213971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.225022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.225042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.225051] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.239558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.239579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.239587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.251515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.251536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.251545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.298 [2024-06-10 11:51:35.264594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.298 [2024-06-10 11:51:35.264615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.298 [2024-06-10 11:51:35.264624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.277174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.277194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.277203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.288725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.288745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.288753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.301581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.301602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.301611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.313552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.313573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.313582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.328158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.328181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.328192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.338953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.338974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.338983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.352303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.352324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.352332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.365944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.365965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.365974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.377742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.377763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.377772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.390160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.390181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.390191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.404104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.404125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:43:06.559 [2024-06-10 11:51:35.404134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.414761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.414782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.414791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.427492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.427514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.427527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.440896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.440917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.440926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.452399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.452420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.452429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.465114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.465135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.465143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.478704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.478725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.478733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.489165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.489186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17289 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.489195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.502955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.502976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.502984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.513967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.513987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.513996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.559 [2024-06-10 11:51:35.527429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.559 [2024-06-10 11:51:35.527450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.559 [2024-06-10 11:51:35.527459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.541905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.541930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.541939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.553503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.553525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.553533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.567373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.567395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.567404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.578958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.578979] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.578988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.592834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.592855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.592863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.604112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.604134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.604143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.616607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.616629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.616638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.630266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.820 [2024-06-10 11:51:35.630286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.820 [2024-06-10 11:51:35.630295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.820 [2024-06-10 11:51:35.642763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.642784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.642793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.654814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.654834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.654842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.666703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.666724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.666733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.680570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.680590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.680599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.692593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.692614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.692622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.703991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.704012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.704020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.717572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.717592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.717600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.731255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.731276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.731284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.741939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.741960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.741969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.755592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 
00:43:06.821 [2024-06-10 11:51:35.755613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.755625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.769211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.769232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.769242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:06.821 [2024-06-10 11:51:35.780992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:06.821 [2024-06-10 11:51:35.781012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.821 [2024-06-10 11:51:35.781021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.794569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.794591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.794599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.805773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.805795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.805803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.819113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.819134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.819143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.831436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.831457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.831466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.844039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.844060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.844069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.856644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.856665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.856679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.868167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.868188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.868197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.082 [2024-06-10 11:51:35.882595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.082 [2024-06-10 11:51:35.882617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.082 [2024-06-10 11:51:35.882625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.893436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.893457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.893466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.906536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.906558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.906567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.919382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.919402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.919411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.930194] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.930216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.930224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.943985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.944006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.944014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.956767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.956787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.956796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.969032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.969053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.969065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.981257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.981278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.981287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:35.993301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:35.993322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:35.993331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:36.006354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:36.006376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:36.006385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:43:07.083 [2024-06-10 11:51:36.018908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:36.018929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:36.018938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:36.032446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:36.032467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:36.032476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.083 [2024-06-10 11:51:36.044494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.083 [2024-06-10 11:51:36.044516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.083 [2024-06-10 11:51:36.044525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.056114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.056136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.056145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.068520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.068542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.068551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.082106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.082131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.082140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.092173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.092195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.092204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.106693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.106715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.106724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.120824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.120846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.120855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.134065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.134087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.134096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.145649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.145676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.145686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.157626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.157647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.157656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.171073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.171094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.171102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.184138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.184160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.344 [2024-06-10 11:51:36.184169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.344 [2024-06-10 11:51:36.195902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.344 [2024-06-10 11:51:36.195923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.195932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.210042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.210063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.210072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.220518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.220540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.220548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.234360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.234381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.234390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.247011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.247032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.247041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.260160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.260182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.260190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.271085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.271106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.271115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.284862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.284884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.284893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.298553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.298575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.298587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.345 [2024-06-10 11:51:36.309726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.345 [2024-06-10 11:51:36.309748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.345 [2024-06-10 11:51:36.309757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.322851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.322873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.322882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.336281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.336302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.336311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.347907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.347929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.347939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.360694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.360716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:07.606 [2024-06-10 11:51:36.360724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.372468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.372490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.386585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.386606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.386614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.397698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.397719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.397728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.410802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.410830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.410839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.423637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.423658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.423666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.435895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.435917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.435925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.448284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.448304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10369 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.448314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.459584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.459605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.459615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.472092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.472114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.472122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.485754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.485775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.485784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.496448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.496469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.496478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.511196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.511217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.511226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.523514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.606 [2024-06-10 11:51:36.523535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.606 [2024-06-10 11:51:36.523544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.606 [2024-06-10 11:51:36.536315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.607 [2024-06-10 11:51:36.536336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:8093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.607 [2024-06-10 11:51:36.536345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.607 [2024-06-10 11:51:36.547259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.607 [2024-06-10 11:51:36.547280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.607 [2024-06-10 11:51:36.547289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.607 [2024-06-10 11:51:36.561062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.607 [2024-06-10 11:51:36.561084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.607 [2024-06-10 11:51:36.561092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.607 [2024-06-10 11:51:36.575601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.607 [2024-06-10 11:51:36.575622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.607 [2024-06-10 11:51:36.575631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.867 [2024-06-10 11:51:36.586411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.867 [2024-06-10 11:51:36.586431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.867 [2024-06-10 11:51:36.586440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.867 [2024-06-10 11:51:36.600194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.600215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.600225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.613822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.613843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.613852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.624382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.624403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.624416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.638522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.638543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.638552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.649290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.649311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.649320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.663849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.663870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.663879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.675321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.675342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.675351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.688563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.688584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.688593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.701343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.701364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.701372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.714237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 
[2024-06-10 11:51:36.714258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.714267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.726965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.726986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.726995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.737980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.738005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.738014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.751318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.751340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.751348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.763850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.763871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.763879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.776731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.776752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.776761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.787858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.787880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.787889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.799530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.799551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.799560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.813708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.813729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.813738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:07.868 [2024-06-10 11:51:36.825486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:07.868 [2024-06-10 11:51:36.825507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:07.868 [2024-06-10 11:51:36.825516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.838650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.838675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.838688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.850124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.850145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.850154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.863163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.863183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.863193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.876453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.876474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.876483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.888536] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.888558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.888567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.901174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.901195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.901203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.914044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.914066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.914075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.924970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.924992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.925001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.939304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.130 [2024-06-10 11:51:36.939324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.130 [2024-06-10 11:51:36.939333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.130 [2024-06-10 11:51:36.950012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:36.950038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:36.950047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.131 [2024-06-10 11:51:36.964578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:36.964599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:36.964607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:43:08.131 [2024-06-10 11:51:36.977624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:36.977644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:36.977654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.131 [2024-06-10 11:51:36.988561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:36.988582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:36.988591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.131 [2024-06-10 11:51:37.001268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:37.001289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:37.001298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.131 [2024-06-10 11:51:37.012763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:37.012784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:37.012793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.131 [2024-06-10 11:51:37.025759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:37.025780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:37.025789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.131 [2024-06-10 11:51:37.039835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x69a4a0) 00:43:08.131 [2024-06-10 11:51:37.039856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:08.131 [2024-06-10 11:51:37.039864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:08.131 00:43:08.131 Latency(us) 00:43:08.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:08.131 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:08.131 nvme0n1 : 2.01 20208.65 78.94 0.00 0.00 6326.53 3686.40 16711.68 00:43:08.131 =================================================================================================================== 00:43:08.131 Total : 20208.65 78.94 0.00 0.00 6326.53 
3686.40 16711.68 00:43:08.131 0 00:43:08.131 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:43:08.131 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:43:08.131 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:43:08.131 | .driver_specific 00:43:08.131 | .nvme_error 00:43:08.131 | .status_code 00:43:08.131 | .command_transient_transport_error' 00:43:08.131 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2505372 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2505372 ']' 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2505372 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2505372 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2505372' 00:43:08.392 killing process with pid 2505372 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2505372 00:43:08.392 Received shutdown signal, test time was about 2.000000 seconds 00:43:08.392 00:43:08.392 Latency(us) 00:43:08.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:08.392 =================================================================================================================== 00:43:08.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:08.392 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2505372 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2506053 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2506053 /var/tmp/bperf.sock 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2506053 ']' 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 
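The get_transient_errcount step traced above reduces to one RPC plus a jq filter; a minimal shell sketch of it, using only the rpc.py path, socket, bdev name, and filter printed in the trace (the variable names are added for readability):

  # Read back the per-status-code NVMe error counters bdevperf has been accumulating
  # (available because bdev_nvme_set_options was called with --nvme-error-stat).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes when at least one COMMAND TRANSIENT TRANSPORT ERROR was counted;
  # in the run above the counter reads 158.
  (( errcount > 0 ))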
00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:08.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:08.654 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:43:08.654 [2024-06-10 11:51:37.494818] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:08.654 [2024-06-10 11:51:37.494878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506053 ] 00:43:08.654 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:08.654 Zero copy mechanism will not be used. 00:43:08.654 EAL: No free 2048 kB hugepages reported on node 1 00:43:08.654 [2024-06-10 11:51:37.552626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.654 [2024-06-10 11:51:37.616035] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:08.915 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:08.915 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:43:08.915 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:08.915 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:09.176 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:43:09.176 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:09.176 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:09.176 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:09.176 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:09.176 11:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:09.437 nvme0n1 00:43:09.437 11:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:43:09.437 11:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:09.437 11:51:38 
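For the 131072-byte, queue-depth-16 randread pass being set up above, the bdevperf launch and RPC sequence from the xtrace, reassembled as a sketch. All binaries, sockets, and arguments are the ones printed in the trace; the backgrounding and comments are added, and the socket used by the rpc_cmd helper for accel_error_inject_error is not visible in the trace, so the rpc.py default socket is assumed for those two calls:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf in RPC-server mode (-z): 128 KiB random reads, queue depth 16, 2 s runtime,
  # core mask 0x2. The test waits for the socket (waitforlisten) before issuing RPCs.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

  # Count NVMe errors per status code and retry I/O indefinitely, so injected digest errors
  # show up as command_transient_transport_error counters instead of failing the workload.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previously armed crc32c error injection before attaching the controller.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # Attach the TCP target with data digest enabled (--ddgst); the resulting bdev is nvme0n1.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm the accel error injector to corrupt 32 crc32c operations, then start the timed run;
  # the "data digest error" records that follow are the expected effect of that corruption.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests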
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:09.437 11:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:09.437 11:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:43:09.437 11:51:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:09.437 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:09.437 Zero copy mechanism will not be used. 00:43:09.437 Running I/O for 2 seconds... 00:43:09.699 [2024-06-10 11:51:38.415880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.415918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.415930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.429704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.429730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.429740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.443591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.443615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.443625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.457097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.457120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.457129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.471222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.471244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.471253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.485576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.485598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.485607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.501169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.501192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.501201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.516106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.516128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.516137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.530409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.530432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.530441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.544353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.544376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.544386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.558368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.558390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.558403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.572463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.572484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.572493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.586386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.586408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.586417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.600590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.699 [2024-06-10 11:51:38.600611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.699 [2024-06-10 11:51:38.600620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.699 [2024-06-10 11:51:38.614877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.700 [2024-06-10 11:51:38.614899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.700 [2024-06-10 11:51:38.614908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.700 [2024-06-10 11:51:38.629041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.700 [2024-06-10 11:51:38.629063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.700 [2024-06-10 11:51:38.629071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.700 [2024-06-10 11:51:38.643299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.700 [2024-06-10 11:51:38.643322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.700 [2024-06-10 11:51:38.643330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.700 [2024-06-10 11:51:38.657535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.700 [2024-06-10 11:51:38.657558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.700 [2024-06-10 11:51:38.657567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.961 [2024-06-10 11:51:38.671412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.961 [2024-06-10 11:51:38.671435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.961 [2024-06-10 11:51:38.671444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.961 [2024-06-10 11:51:38.685486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 
00:43:09.961 [2024-06-10 11:51:38.685508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.961 [2024-06-10 11:51:38.685517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.961 [2024-06-10 11:51:38.699833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.961 [2024-06-10 11:51:38.699855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.961 [2024-06-10 11:51:38.699864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.961 [2024-06-10 11:51:38.713278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.961 [2024-06-10 11:51:38.713300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.961 [2024-06-10 11:51:38.713309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.961 [2024-06-10 11:51:38.727101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.961 [2024-06-10 11:51:38.727124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.961 [2024-06-10 11:51:38.727133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.961 [2024-06-10 11:51:38.741475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.741498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.741506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.755844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.755866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.755876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.770466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.770488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.770497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.784724] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.784746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.784755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.798755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.798777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.798790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.812961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.812984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.812993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.826182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.826204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.826213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.836920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.836943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.836951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.849788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.849811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.849820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.860907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.860929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.860938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.870325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.870347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.870355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.881280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.881303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.881312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.893097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.893119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.893128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.904412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.904439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.904448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.917550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.917572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.917580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:09.962 [2024-06-10 11:51:38.932110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:09.962 [2024-06-10 11:51:38.932133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:09.962 [2024-06-10 11:51:38.932142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:38.945351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:38.945374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:38.945383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:38.959437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:38.959459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:38.959468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:38.973479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:38.973501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:38.973510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:38.987808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:38.987830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:38.987838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.000549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.000571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.000580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.014911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.014933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.014942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.029404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.029427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.029435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.042047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.042069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.042078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.052127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.052149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.052158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.063240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.063262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.063270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.074753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.074775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.074784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.086612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.086635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.086643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.098441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.098463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.098472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.109573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.109595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.109604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.122263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.122285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:10.287 [2024-06-10 11:51:39.122298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.134269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.134291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.134300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.146236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.146258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.146267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.156684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.156706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.156714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.166946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.166969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.166977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.178797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.178819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.178828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.193041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.193063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.193072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.206153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.206175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.206184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.219545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.219567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.287 [2024-06-10 11:51:39.219575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.287 [2024-06-10 11:51:39.231583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.287 [2024-06-10 11:51:39.231612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.288 [2024-06-10 11:51:39.231621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.550 [2024-06-10 11:51:39.244282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.550 [2024-06-10 11:51:39.244304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.244313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.255757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.255779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.255787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.266808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.266831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.266840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.278803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.278826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.278835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.290476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.290498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.290507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.303147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.303169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.303178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.315747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.315770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.315778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.327150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.327173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.327182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.342203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.342226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.342235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.356324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.356347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.356356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.367543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.367566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.367575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.379630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.379654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.379662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.391263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.391286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.391294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.404479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.404503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.404511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.414653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.414680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.414690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.423353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.423376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.423384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.434585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.434612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.434621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.445736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.445759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.445768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.457946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 
00:43:10.551 [2024-06-10 11:51:39.457969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.457977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.467134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.467157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.467166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.479231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.479253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.479262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.490023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.490046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.490055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.501636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.501659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.501668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.551 [2024-06-10 11:51:39.514279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.551 [2024-06-10 11:51:39.514302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.551 [2024-06-10 11:51:39.514311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.814 [2024-06-10 11:51:39.525069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.814 [2024-06-10 11:51:39.525092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.814 [2024-06-10 11:51:39.525101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.814 [2024-06-10 11:51:39.535616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.814 [2024-06-10 11:51:39.535638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.814 [2024-06-10 11:51:39.535647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.814 [2024-06-10 11:51:39.544876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.544899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.544907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.556601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.556624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.556632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.566864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.566888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.566897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.578452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.578476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.578485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.589419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.589442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.589451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.598947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.598970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.598978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.610391] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.610414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.610423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.621378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.621401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.621413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.633707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.633730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.633739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.645161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.645184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.645193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.656749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.656772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.656780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.669546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.669568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.669577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.681392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.681415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.681424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:43:10.815 [2024-06-10 11:51:39.691336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.691358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.691367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.703137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.703160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.703168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.714403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.714425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.714434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.725240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.725267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.725276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.736256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.736279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.736288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.748326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.748348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.748357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.756600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.756623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.756631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.767357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.767380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.767389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:10.815 [2024-06-10 11:51:39.779711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:10.815 [2024-06-10 11:51:39.779734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:10.815 [2024-06-10 11:51:39.779743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.790291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.790314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.790323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.802002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.802024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.802034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.813267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.813289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.813298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.823323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.823346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.823355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.832864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.832887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.832896] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.842247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.842269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.842278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.851270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.851292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.851301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.859643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.859665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.859680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.867552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.867574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.867583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.874935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.874957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.874966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.882150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.882172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.882181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.889782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.889804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.889816] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.897539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.897560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.897569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.905248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.905270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.905278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.912812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.912835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.912843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.922329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.922351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.922360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.934209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.934231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.934240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.945507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.945530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.945539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.957513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.957536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:11.078 [2024-06-10 11:51:39.957545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.968806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.968828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.968836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.981097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.981123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.981132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:39.991499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:39.991521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:39.991530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:40.001275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:40.001298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:40.001307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:40.011808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:40.011831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:40.011840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:40.021252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:40.021275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:40.021284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:40.031715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.078 [2024-06-10 11:51:40.031737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.078 [2024-06-10 11:51:40.031746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.078 [2024-06-10 11:51:40.041499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.079 [2024-06-10 11:51:40.041521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.079 [2024-06-10 11:51:40.041530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.051279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.051302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.051311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.061237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.061260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.061268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.073434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.073456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.073465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.086439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.086461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.086469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.099027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.099050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.099059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.110250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.110272] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.110282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.121018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.121041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.121050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.133322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.133344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.133353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.143280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.143302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.143310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.154946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.154969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.154977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.166532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.166558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.166567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.177935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.177956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.177965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.188796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.188819] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.188828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.200673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.340 [2024-06-10 11:51:40.200697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.340 [2024-06-10 11:51:40.200707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.340 [2024-06-10 11:51:40.211448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.211470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.211479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.223399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.223421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.223430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.234496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.234519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.234527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.245906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.245927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.245936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.256048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.256070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.256079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.265887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.265910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.265918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.276805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.276828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.276836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.288127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.288149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.288158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.298705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.298727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.298736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.341 [2024-06-10 11:51:40.309637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.341 [2024-06-10 11:51:40.309660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.341 [2024-06-10 11:51:40.309668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.602 [2024-06-10 11:51:40.319780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.602 [2024-06-10 11:51:40.319803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.602 [2024-06-10 11:51:40.319811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.602 [2024-06-10 11:51:40.330844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.602 [2024-06-10 11:51:40.330866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.602 [2024-06-10 11:51:40.330875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.602 [2024-06-10 11:51:40.340234] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.602 [2024-06-10 11:51:40.340255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.602 [2024-06-10 11:51:40.340264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.603 [2024-06-10 11:51:40.351799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.603 [2024-06-10 11:51:40.351822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.603 [2024-06-10 11:51:40.351834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.603 [2024-06-10 11:51:40.364245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.603 [2024-06-10 11:51:40.364267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.603 [2024-06-10 11:51:40.364276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:11.603 [2024-06-10 11:51:40.375643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.603 [2024-06-10 11:51:40.375665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.603 [2024-06-10 11:51:40.375678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:11.603 [2024-06-10 11:51:40.386118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.603 [2024-06-10 11:51:40.386140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.603 [2024-06-10 11:51:40.386149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.603 [2024-06-10 11:51:40.396571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.603 [2024-06-10 11:51:40.396594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.603 [2024-06-10 11:51:40.396602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:11.603 [2024-06-10 11:51:40.404945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34400) 00:43:11.603 [2024-06-10 11:51:40.404966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:11.603 [2024-06-10 11:51:40.404975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
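Each digest failure above is reported back as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, and the trace that follows reads the accumulated count out of bdevperf over its RPC socket. A minimal stand-alone sketch of that readback, assuming only the socket path and bdev name already used in this run:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test only asserts that this counter is greater than zero; here the readback returns 171.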
00:43:11.603
00:43:11.603 Latency(us)
00:43:11.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:11.603 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:43:11.603 nvme0n1 : 2.00 2651.72 331.46 0.00 0.00 6027.93 1447.25 15510.19
00:43:11.603 ===================================================================================================================
00:43:11.603 Total : 2651.72 331.46 0.00 0.00 6027.93 1447.25 15510.19
00:43:11.603 0
00:43:11.603 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:43:11.603 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:43:11.603 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:43:11.603 | .driver_specific
00:43:11.603 | .nvme_error
00:43:11.603 | .status_code
00:43:11.603 | .command_transient_transport_error'
00:43:11.603 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 ))
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2506053
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2506053 ']'
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2506053
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2506053
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2506053'
00:43:11.865 killing process with pid 2506053
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2506053
00:43:11.865 Received shutdown signal, test time was about 2.000000 seconds
00:43:11.865
00:43:11.865 Latency(us)
00:43:11.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:11.865 ===================================================================================================================
00:43:11.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2506053
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:43:11.865
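For the randwrite pass requested above (run_bperf_err randwrite 4096 128) the trace below repeats the same bdevperf setup. Condensed into a sketch, assuming bperf_rpc is a thin wrapper around the rpc.py calls the trace itself prints, and that rpc_cmd talks to the test's default RPC socket rather than bperf.sock:

  # start a fresh bdevperf in wait-for-RPC mode (-z): randwrite, 4 KiB I/O, queue depth 128, 2-second run
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # keep per-status-code NVMe error counters and retry failed I/O indefinitely
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with data digest (--ddgst) enabled
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt every 256th crc32c in the accel framework, then start the workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests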
11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2506731
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2506731 /var/tmp/bperf.sock
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2506731 ']'
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:11.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:43:11.865 11:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:43:12.126 [2024-06-10 11:51:40.880413] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization...
00:43:12.126 [2024-06-10 11:51:40.880474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506731 ]
00:43:12.126 EAL: No free 2048 kB hugepages reported on node 1
00:43:12.126 [2024-06-10 11:51:40.940524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:12.126 [2024-06-10 11:51:41.004190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:43:12.126 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:43:12.126 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:43:12.126 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:12.126 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:43:12.387 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:43:12.387 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:43:12.387 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:43:12.387 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:43:12.387 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:43:12.387 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:12.648 nvme0n1 00:43:12.648 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:43:12.648 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:12.648 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:12.648 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:12.648 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:43:12.648 11:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:12.909 Running I/O for 2 seconds... 00:43:12.909 [2024-06-10 11:51:41.697336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f9f68 00:43:12.909 [2024-06-10 11:51:41.698281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.909 [2024-06-10 11:51:41.698316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:12.909 [2024-06-10 11:51:41.709742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190fb048 00:43:12.909 [2024-06-10 11:51:41.710567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.909 [2024-06-10 11:51:41.710589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.909 [2024-06-10 11:51:41.721945] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:12.910 [2024-06-10 11:51:41.722999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.723021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.733772] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:12.910 [2024-06-10 11:51:41.734861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.734881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.745570] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:12.910 [2024-06-10 11:51:41.746661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.746690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.757387] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:12.910 [2024-06-10 11:51:41.758473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.758494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.769193] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:12.910 [2024-06-10 11:51:41.770280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.770301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.780968] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:12.910 [2024-06-10 11:51:41.782012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.782031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.792791] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:12.910 [2024-06-10 11:51:41.793865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.793884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.804564] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:12.910 [2024-06-10 11:51:41.805647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.805667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.816356] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:12.910 [2024-06-10 11:51:41.817445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.817465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.828120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:12.910 [2024-06-10 11:51:41.829212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.829231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.839867] tcp.c:2062:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:12.910 [2024-06-10 11:51:41.840977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.840996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.851623] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:12.910 [2024-06-10 11:51:41.852710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.852730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.863385] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:12.910 [2024-06-10 11:51:41.864472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.864491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:12.910 [2024-06-10 11:51:41.875166] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:12.910 [2024-06-10 11:51:41.876221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:12.910 [2024-06-10 11:51:41.876240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.886909] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.172 [2024-06-10 11:51:41.887972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.887992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.898697] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.172 [2024-06-10 11:51:41.899779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.899799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.910453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.172 [2024-06-10 11:51:41.911539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.911558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.922194] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.172 [2024-06-10 11:51:41.923291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.923310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.933937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.172 [2024-06-10 11:51:41.935037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.935056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.945711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.172 [2024-06-10 11:51:41.946776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.946795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.957468] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.172 [2024-06-10 11:51:41.958556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.958576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.969232] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.172 [2024-06-10 11:51:41.970316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.970335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.980977] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.172 [2024-06-10 11:51:41.982023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.172 [2024-06-10 11:51:41.982043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.172 [2024-06-10 11:51:41.992760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.172 [2024-06-10 11:51:41.993839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:41.993859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 
11:51:42.004527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.173 [2024-06-10 11:51:42.005612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.005632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.016316] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.173 [2024-06-10 11:51:42.017400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.017420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.028097] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.173 [2024-06-10 11:51:42.029185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.029205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.039873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.173 [2024-06-10 11:51:42.040916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.040936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.051639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.173 [2024-06-10 11:51:42.052728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.052750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.063392] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.173 [2024-06-10 11:51:42.064480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.064500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.075160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.173 [2024-06-10 11:51:42.076237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.076257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:43:13.173 [2024-06-10 11:51:42.087050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.173 [2024-06-10 11:51:42.088133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.088153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.098846] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.173 [2024-06-10 11:51:42.099923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.099942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.110622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.173 [2024-06-10 11:51:42.111679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.111698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.122369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.173 [2024-06-10 11:51:42.123465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.123485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.173 [2024-06-10 11:51:42.134101] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.173 [2024-06-10 11:51:42.135188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.173 [2024-06-10 11:51:42.135207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.145954] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.434 [2024-06-10 11:51:42.147016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.147035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.157704] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.434 [2024-06-10 11:51:42.158778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.158796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.169471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.434 [2024-06-10 11:51:42.170520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.170541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.181224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.434 [2024-06-10 11:51:42.182305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.182324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.193206] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.434 [2024-06-10 11:51:42.194302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.194321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.204978] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.434 [2024-06-10 11:51:42.206058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.206078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.216749] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.434 [2024-06-10 11:51:42.217854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.217873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.228497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.434 [2024-06-10 11:51:42.229579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.229598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.240241] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.434 [2024-06-10 11:51:42.241330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.241349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.252005] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.434 [2024-06-10 11:51:42.253105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.253124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.263743] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.434 [2024-06-10 11:51:42.264790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.264809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.275508] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.434 [2024-06-10 11:51:42.276596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.276615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.287279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.434 [2024-06-10 11:51:42.288359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.288379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.299071] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.434 [2024-06-10 11:51:42.300170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.434 [2024-06-10 11:51:42.300189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.434 [2024-06-10 11:51:42.310805] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.434 [2024-06-10 11:51:42.311900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.311919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.322556] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.435 [2024-06-10 11:51:42.323642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.323661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.334328] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.435 [2024-06-10 11:51:42.335406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.335426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.346075] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.435 [2024-06-10 11:51:42.347176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.347195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.357821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.435 [2024-06-10 11:51:42.358904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.358927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.369576] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.435 [2024-06-10 11:51:42.370658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.370681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.381309] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.435 [2024-06-10 11:51:42.382402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.382421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.393069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.435 [2024-06-10 11:51:42.394149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.435 [2024-06-10 11:51:42.394169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.435 [2024-06-10 11:51:42.404830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.697 [2024-06-10 11:51:42.405928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.405948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.416573] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.697 [2024-06-10 11:51:42.417631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.417651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.428314] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.697 [2024-06-10 11:51:42.429395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.429414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.440061] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.697 [2024-06-10 11:51:42.441157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.441176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.451829] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.697 [2024-06-10 11:51:42.452907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.452926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.463582] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.697 [2024-06-10 11:51:42.464692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.464711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.475357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.697 [2024-06-10 11:51:42.476405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.476425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.487119] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.697 [2024-06-10 11:51:42.488165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 
11:51:42.488185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.498888] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.697 [2024-06-10 11:51:42.500004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.500024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.510628] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.697 [2024-06-10 11:51:42.511727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.511747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.522362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.697 [2024-06-10 11:51:42.523457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.523476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.534113] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.697 [2024-06-10 11:51:42.535198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.535217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.545869] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.697 [2024-06-10 11:51:42.546953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.546972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.557610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.697 [2024-06-10 11:51:42.558697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.558716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.569357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.697 [2024-06-10 11:51:42.570463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:43:13.697 [2024-06-10 11:51:42.570481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.581106] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.697 [2024-06-10 11:51:42.582184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.582203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.592900] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.697 [2024-06-10 11:51:42.593990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.594010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.604634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.697 [2024-06-10 11:51:42.605733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.605752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.616405] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.697 [2024-06-10 11:51:42.617468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.617487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.628157] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.697 [2024-06-10 11:51:42.629215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.629234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.639907] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.697 [2024-06-10 11:51:42.640978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.640997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.651655] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.697 [2024-06-10 11:51:42.652726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20693 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.652744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.697 [2024-06-10 11:51:42.663402] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.697 [2024-06-10 11:51:42.664482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.697 [2024-06-10 11:51:42.664505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.675181] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.960 [2024-06-10 11:51:42.676258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.676277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.686923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.960 [2024-06-10 11:51:42.688007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.688027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.698654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.960 [2024-06-10 11:51:42.699753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.699773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.710405] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.960 [2024-06-10 11:51:42.711489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.711509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.722174] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.960 [2024-06-10 11:51:42.723257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.723276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.733926] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.960 [2024-06-10 11:51:42.735017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20587 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.735037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.745692] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.960 [2024-06-10 11:51:42.746769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.746788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.757429] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.960 [2024-06-10 11:51:42.758521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.960 [2024-06-10 11:51:42.758540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.960 [2024-06-10 11:51:42.769158] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.960 [2024-06-10 11:51:42.770246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.770269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.780903] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.961 [2024-06-10 11:51:42.781947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.781965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.792673] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.961 [2024-06-10 11:51:42.793754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.793774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.804413] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:13.961 [2024-06-10 11:51:42.805507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.805527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.816163] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:13.961 [2024-06-10 11:51:42.817268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:19146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.817287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.827915] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:13.961 [2024-06-10 11:51:42.829009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.829029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.839666] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:13.961 [2024-06-10 11:51:42.840759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.840779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.851467] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:13.961 [2024-06-10 11:51:42.852568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.852587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.863226] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:13.961 [2024-06-10 11:51:42.864323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.864342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.875005] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:13.961 [2024-06-10 11:51:42.876048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.876068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.886766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:13.961 [2024-06-10 11:51:42.887844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.887865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.898518] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:13.961 [2024-06-10 11:51:42.899620] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.899639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.910286] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:13.961 [2024-06-10 11:51:42.911385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.911405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:13.961 [2024-06-10 11:51:42.922036] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:13.961 [2024-06-10 11:51:42.923129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:13.961 [2024-06-10 11:51:42.923150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.223 [2024-06-10 11:51:42.933789] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:14.223 [2024-06-10 11:51:42.934838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.223 [2024-06-10 11:51:42.934857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.223 [2024-06-10 11:51:42.945579] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:14.223 [2024-06-10 11:51:42.946662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.223 [2024-06-10 11:51:42.946684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.223 [2024-06-10 11:51:42.957319] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:14.223 [2024-06-10 11:51:42.958408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.223 [2024-06-10 11:51:42.958427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.223 [2024-06-10 11:51:42.969070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:14.223 [2024-06-10 11:51:42.970123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.223 [2024-06-10 11:51:42.970143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.223 [2024-06-10 11:51:42.980831] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:14.223 [2024-06-10 11:51:42.981894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.223 [2024-06-10 11:51:42.981914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.223 [2024-06-10 11:51:42.992622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:14.224 [2024-06-10 11:51:42.993722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:42.993741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.004385] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:14.224 [2024-06-10 11:51:43.005472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.005492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.016161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:14.224 [2024-06-10 11:51:43.017257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.017277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.027938] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:14.224 [2024-06-10 11:51:43.029015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.029035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.039684] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:14.224 [2024-06-10 11:51:43.040776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.040796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.051442] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:14.224 [2024-06-10 11:51:43.052528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.052548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.063199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:14.224 [2024-06-10 11:51:43.064289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.064309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.074963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:14.224 [2024-06-10 11:51:43.076005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.076028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.086730] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6890 00:43:14.224 [2024-06-10 11:51:43.087792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.087812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.098565] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:14.224 [2024-06-10 11:51:43.099730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.099750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.110383] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:14.224 [2024-06-10 11:51:43.111467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.111486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.122161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef6a8 00:43:14.224 [2024-06-10 11:51:43.123239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.123259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.133949] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f2948 00:43:14.224 [2024-06-10 11:51:43.135060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.135080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.145701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f5be8 00:43:14.224 [2024-06-10 
11:51:43.146790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.146810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.157453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f8e88 00:43:14.224 [2024-06-10 11:51:43.158537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.158556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.169211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4578 00:43:14.224 [2024-06-10 11:51:43.170301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.170320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.180965] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:14.224 [2024-06-10 11:51:43.182051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.182070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.224 [2024-06-10 11:51:43.192962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f0350 00:43:14.224 [2024-06-10 11:51:43.194048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.224 [2024-06-10 11:51:43.194068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.206140] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f35f0 00:43:14.486 [2024-06-10 11:51:43.207839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.486 [2024-06-10 11:51:43.207858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.217125] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ed0b0 00:43:14.486 [2024-06-10 11:51:43.218340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.486 [2024-06-10 11:51:43.218360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.228794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190fb048 00:43:14.486 
[2024-06-10 11:51:43.229990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.486 [2024-06-10 11:51:43.230009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.240594] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f9f68 00:43:14.486 [2024-06-10 11:51:43.241794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.486 [2024-06-10 11:51:43.241813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.252394] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f81e0 00:43:14.486 [2024-06-10 11:51:43.253568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.486 [2024-06-10 11:51:43.253587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.264186] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f92c0 00:43:14.486 [2024-06-10 11:51:43.265382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.486 [2024-06-10 11:51:43.265400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.275963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f7100 00:43:14.486 [2024-06-10 11:51:43.277163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.486 [2024-06-10 11:51:43.277182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.486 [2024-06-10 11:51:43.287715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f4f40 00:43:14.486 [2024-06-10 11:51:43.288914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.288933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.299521] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6020 00:43:14.487 [2024-06-10 11:51:43.300721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.300740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.310497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with 
pdu=0x2000190f1430 00:43:14.487 [2024-06-10 11:51:43.311686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.311706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.323605] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5ec8 00:43:14.487 [2024-06-10 11:51:43.324950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.324970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.335361] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e3d08 00:43:14.487 [2024-06-10 11:51:43.336722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.336742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.347133] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eaef0 00:43:14.487 [2024-06-10 11:51:43.348507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.348527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.358927] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e2c28 00:43:14.487 [2024-06-10 11:51:43.360298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.360317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.370727] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e8088 00:43:14.487 [2024-06-10 11:51:43.372094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.372114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.382490] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e9168 00:43:14.487 [2024-06-10 11:51:43.383817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.383841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.394292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f44c10) with pdu=0x2000190ea248 00:43:14.487 [2024-06-10 11:51:43.395660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.395683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.406072] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190fcdd0 00:43:14.487 [2024-06-10 11:51:43.407436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.407455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.417875] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e12d8 00:43:14.487 [2024-06-10 11:51:43.419230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.419249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.429694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e01f8 00:43:14.487 [2024-06-10 11:51:43.431059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.431080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.441471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190df118 00:43:14.487 [2024-06-10 11:51:43.442839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.442859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.487 [2024-06-10 11:51:43.453266] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190de038 00:43:14.487 [2024-06-10 11:51:43.454631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.487 [2024-06-10 11:51:43.454651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.465044] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f1430 00:43:14.749 [2024-06-10 11:51:43.466414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.466434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.476838] tcp.c:2062:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ef270 00:43:14.749 [2024-06-10 11:51:43.478168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.478187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.488626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eb760 00:43:14.749 [2024-06-10 11:51:43.489988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.490008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.500446] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ec840 00:43:14.749 [2024-06-10 11:51:43.501815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.501835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.512235] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e5220 00:43:14.749 [2024-06-10 11:51:43.513602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.513621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.524021] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e3060 00:43:14.749 [2024-06-10 11:51:43.525388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.525408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.535804] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e4140 00:43:14.749 [2024-06-10 11:51:43.537173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.537193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.547611] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1f80 00:43:14.749 [2024-06-10 11:51:43.548986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.549006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.559406] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e73e0 00:43:14.749 [2024-06-10 11:51:43.560781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.560800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.571197] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e84c0 00:43:14.749 [2024-06-10 11:51:43.572536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.572555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.749 [2024-06-10 11:51:43.582962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e95a0 00:43:14.749 [2024-06-10 11:51:43.584331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.749 [2024-06-10 11:51:43.584350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.594751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190ea680 00:43:14.750 [2024-06-10 11:51:43.596115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.596135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.606512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e1710 00:43:14.750 [2024-06-10 11:51:43.607884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.607903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.618292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190e0630 00:43:14.750 [2024-06-10 11:51:43.619616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.619635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.630084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190df550 00:43:14.750 [2024-06-10 11:51:43.631451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.631471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 
[2024-06-10 11:51:43.641855] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190de470 00:43:14.750 [2024-06-10 11:51:43.643219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.643237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.653639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190feb58 00:43:14.750 [2024-06-10 11:51:43.655023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.655043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.665395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190eee38 00:43:14.750 [2024-06-10 11:51:43.666630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.666650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.677566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f6020 00:43:14.750 [2024-06-10 11:51:43.679094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.679113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:43:14.750 [2024-06-10 11:51:43.689519] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44c10) with pdu=0x2000190f4f40 00:43:14.750 [2024-06-10 11:51:43.691040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:14.750 [2024-06-10 11:51:43.691063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:14.750 00:43:14.750 Latency(us) 00:43:14.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:14.750 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:14.750 nvme0n1 : 2.01 21638.98 84.53 0.00 0.00 5905.37 3017.39 13598.72 00:43:14.750 =================================================================================================================== 00:43:14.750 Total : 21638.98 84.53 0.00 0.00 5905.37 3017.39 13598.72 00:43:14.750 0 00:43:14.750 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:43:14.750 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:43:14.750 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:43:14.750 | .driver_specific 00:43:14.750 | .nvme_error 00:43:14.750 | .status_code 00:43:14.750 | .command_transient_transport_error' 00:43:14.750 11:51:43 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:43:15.012 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:43:15.012 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2506731 00:43:15.012 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2506731 ']' 00:43:15.012 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2506731 00:43:15.012 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:43:15.012 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:15.012 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2506731 00:43:15.274 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:15.274 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:15.274 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2506731' 00:43:15.274 killing process with pid 2506731 00:43:15.274 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2506731 00:43:15.274 Received shutdown signal, test time was about 2.000000 seconds 00:43:15.274 00:43:15.274 Latency(us) 00:43:15.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:15.274 =================================================================================================================== 00:43:15.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:15.274 11:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2506731 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2507409 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2507409 /var/tmp/bperf.sock 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2507409 ']' 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:43:15.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:15.274 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:15.274 [2024-06-10 11:51:44.173970] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:15.274 [2024-06-10 11:51:44.174026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507409 ] 00:43:15.274 I/O size of 131072 is greater than zero copy threshold (65536). 00:43:15.274 Zero copy mechanism will not be used. 00:43:15.274 EAL: No free 2048 kB hugepages reported on node 1 00:43:15.274 [2024-06-10 11:51:44.231793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:15.536 [2024-06-10 11:51:44.294913] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:15.536 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:15.536 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:43:15.536 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:15.536 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:43:15.796 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:43:15.796 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:15.796 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:15.796 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:15.796 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:15.796 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:43:16.058 nvme0n1 00:43:16.058 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:43:16.058 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:16.058 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:16.058 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:16.058 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:43:16.058 11:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:16.058 I/O 
size of 131072 is greater than zero copy threshold (65536). 00:43:16.058 Zero copy mechanism will not be used. 00:43:16.058 Running I/O for 2 seconds... 00:43:16.058 [2024-06-10 11:51:45.022103] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.058 [2024-06-10 11:51:45.022325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.058 [2024-06-10 11:51:45.022362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.320 [2024-06-10 11:51:45.035042] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.320 [2024-06-10 11:51:45.035462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.320 [2024-06-10 11:51:45.035486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.320 [2024-06-10 11:51:45.045922] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.320 [2024-06-10 11:51:45.046300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.320 [2024-06-10 11:51:45.046322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.320 [2024-06-10 11:51:45.055923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.320 [2024-06-10 11:51:45.056297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.320 [2024-06-10 11:51:45.056318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.320 [2024-06-10 11:51:45.065253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.065643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.065664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.075099] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.075509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.075529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.085555] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.085942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.085963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.095216] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.095325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.095344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.105531] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.105906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.105927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.114653] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.115068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.115089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.124661] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.125064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.125084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.135527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.135931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.135952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.145013] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.145279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.145299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.153873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.154227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.154248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.163679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.164049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.164069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.172955] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.173344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.173364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.182494] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.182855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.182877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.191584] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.191981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.192005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.202054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.202403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.202423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.210629] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.210984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.211005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.219446] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.219817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.219838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.228101] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.228463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.228484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.236564] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.236957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.236978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.244295] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.244666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.244692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.252760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.253116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.253138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.263332] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.263722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.263743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.274502] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 [2024-06-10 11:51:45.274910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.274931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.321 [2024-06-10 11:51:45.285812] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.321 
[2024-06-10 11:51:45.286213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.321 [2024-06-10 11:51:45.286234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.584 [2024-06-10 11:51:45.296772] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.584 [2024-06-10 11:51:45.297165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.584 [2024-06-10 11:51:45.297186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.584 [2024-06-10 11:51:45.308261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.584 [2024-06-10 11:51:45.308633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.584 [2024-06-10 11:51:45.308654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.584 [2024-06-10 11:51:45.319277] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.584 [2024-06-10 11:51:45.319658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.584 [2024-06-10 11:51:45.319684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.329899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.330293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.330313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.340945] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.341323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.341343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.351091] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.351525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.351545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.362409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.362797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.362820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.373634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.374031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.374051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.384963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.385333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.385354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.394694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.395059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.395080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.401889] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.402242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.402261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.411473] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.411751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.411770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.420089] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.420468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.420488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.430544] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.430919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.430939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.438936] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.439321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.439341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.447134] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.447482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.447506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.456729] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.457125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.457145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.467119] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.467506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.467526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.476993] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.477084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.477101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.488625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.489006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.489027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
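The repeated pairs of data_crc32_calc_done errors followed by COMMAND TRANSIENT TRANSPORT ERROR completions above are the digest-error pass of host/digest.sh: bdevperf is started against a private RPC socket, NVMe error statistics are enabled, a TCP controller is attached with data digest turned on, CRC32C corruption is injected through the accel layer, and the transient-error count is read back with bdev_get_iostat. The sketch below reconstructs that flow using only the paths, sockets, and RPC calls visible in this trace; the surrounding control flow (waiting, ordering, cleanup) is an assumption for illustration, not the exact autotest script.

#!/usr/bin/env bash
# Hedged reconstruction of the digest-error flow traced above.
# Paths, socket name, and RPC arguments are copied from the log;
# everything else (sleep, variable names, ordering) is illustrative.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # from the trace
bperf_sock=/var/tmp/bperf.sock                              # from the trace

# 1. Start bdevperf on its own RPC socket (flags as logged for the 131072/qd16 run).
"$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
sleep 2   # the real script uses waitforlisten on $bperf_sock; a sleep stands in here

# 2. Enable per-controller NVMe error statistics and unlimited bdev retries.
"$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Attach the TCP controller with data digest enabled (--ddgst), as logged.
"$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Inject CRC32C corruption in the accel layer (every 32nd operation) so that
#    data digests miscompare; the call is copied from the trace, which issues it
#    via rpc_cmd rather than against $bperf_sock.
"$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# 5. Run the workload, then count completions that ended with
#    COMMAND TRANSIENT TRANSPORT ERROR via bdev_get_iostat.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests
errcount=$("$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # pass only if digest errors were actually observed
# (the real run then kills the bdevperf process, as the killprocess lines above show)

The final arithmetic check mirrors the (( 170 > 0 )) evaluation logged earlier: the test only passes when at least one write completed with a transient transport error, which is exactly what the injected digest corruption is meant to provoke.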
00:43:16.585 [2024-06-10 11:51:45.499079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.499468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.499488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.510324] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.510705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.510725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.521551] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.521744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.521763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.532409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.532677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.532696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.543715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.544103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.544123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.585 [2024-06-10 11:51:45.554052] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.585 [2024-06-10 11:51:45.554181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.585 [2024-06-10 11:51:45.554199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.848 [2024-06-10 11:51:45.565131] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.848 [2024-06-10 11:51:45.565288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.848 [2024-06-10 11:51:45.565306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.848 [2024-06-10 11:51:45.576969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.848 [2024-06-10 11:51:45.577451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.848 [2024-06-10 11:51:45.577471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.848 [2024-06-10 11:51:45.587840] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.848 [2024-06-10 11:51:45.588202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.848 [2024-06-10 11:51:45.588222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.848 [2024-06-10 11:51:45.598269] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.848 [2024-06-10 11:51:45.598659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.848 [2024-06-10 11:51:45.598685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.848 [2024-06-10 11:51:45.608402] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.848 [2024-06-10 11:51:45.608833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.848 [2024-06-10 11:51:45.608854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.848 [2024-06-10 11:51:45.618566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.848 [2024-06-10 11:51:45.618867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.848 [2024-06-10 11:51:45.618887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.848 [2024-06-10 11:51:45.628587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.848 [2024-06-10 11:51:45.629011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.629032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.638389] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.638729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.638749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.648383] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.648601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.648621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.658791] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.659227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.659247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.668969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.669346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.669366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.678627] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.679079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.679100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.687418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.687749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.687769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.697849] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.698180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.698200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.708592] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.708874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.708896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.718407] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.718694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.718721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.727811] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.728139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.728160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.737854] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.738169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.738188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.747560] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.747946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.747967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.757640] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.758021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.758041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.767367] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.767827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.777212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.777534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 
[2024-06-10 11:51:45.777554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.787089] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.787437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.787457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.798675] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.799091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.799111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:16.849 [2024-06-10 11:51:45.808821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:16.849 [2024-06-10 11:51:45.809111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:16.849 [2024-06-10 11:51:45.809132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.819390] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.819625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.819647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.830328] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.830626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.830647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.840773] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.841169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.841189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.850983] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.851328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.851349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.860777] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.861091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.861111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.870561] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.870826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.870846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.881875] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.882282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.882302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.892649] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.893012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.893036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.904208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.904641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.904660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.915922] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.916346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.916366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.927955] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.928363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.928384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.939315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.939633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.939653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.950572] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.951030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.951050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.962418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.962996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.963016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.974959] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.975290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.975310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.987483] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.987719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.987738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:45.997508] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:45.997846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:45.997866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:46.008417] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:46.008753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:46.008775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:46.018941] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:46.019301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:46.019322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:46.030882] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:46.031332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:46.031351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:46.042049] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:46.042530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:46.042551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:46.054510] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:46.054796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:46.054816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:46.066281] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:46.066714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:46.066734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.112 [2024-06-10 11:51:46.078038] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.112 [2024-06-10 11:51:46.078436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.112 [2024-06-10 11:51:46.078457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.375 [2024-06-10 11:51:46.089828] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.375 
[2024-06-10 11:51:46.090356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.375 [2024-06-10 11:51:46.090376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.375 [2024-06-10 11:51:46.100921] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.375 [2024-06-10 11:51:46.101348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.375 [2024-06-10 11:51:46.101368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.375 [2024-06-10 11:51:46.111730] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.375 [2024-06-10 11:51:46.112178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.375 [2024-06-10 11:51:46.112198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.375 [2024-06-10 11:51:46.123073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.375 [2024-06-10 11:51:46.123261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.375 [2024-06-10 11:51:46.123280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.135184] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.135544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.135564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.147735] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.147973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.147993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.160369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.160691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.160712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.170900] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.171251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.171272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.181291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.181613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.181634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.189893] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.190274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.190298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.199274] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.199612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.199633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.207940] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.208266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.208286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.216941] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.217180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.217199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.226860] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.227123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.227143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.234545] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.234936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.234957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.243459] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.243794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.243814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.251369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.251730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.251750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.259193] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.259561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.259581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.267974] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.268227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.268247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.276105] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.276427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.276447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.283819] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.284043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.284064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:43:17.376 [2024-06-10 11:51:46.291851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.292188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.292208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.301096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.301301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.301321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.309686] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.310056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.310076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.319149] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.319487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.319507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.328002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.328218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.328237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.376 [2024-06-10 11:51:46.336867] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.376 [2024-06-10 11:51:46.337073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.376 [2024-06-10 11:51:46.337092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.346865] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.347297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.347318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.357211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.357455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.357475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.366996] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.367367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.367387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.376632] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.376873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.376893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.387068] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.387524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.387545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.396665] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.397063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.397083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.406437] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.406738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.406759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.417064] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.417270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.417289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.426822] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.427029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.427051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.435029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.435483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.435503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.444619] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.444931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.444952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.454612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.454998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.455018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.465321] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.465763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.465783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.476116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.476616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.476636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.487998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.488368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.488389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.498938] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.499322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.499342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.509203] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.509559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.509579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.519242] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.519645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.519666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.529010] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.639 [2024-06-10 11:51:46.529247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.639 [2024-06-10 11:51:46.529267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.639 [2024-06-10 11:51:46.538760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.539142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.539163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.548042] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.548369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.548389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.555959] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.556322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 
[2024-06-10 11:51:46.556342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.561911] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.562273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.562292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.567935] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.568144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.568163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.572744] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.573241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.573262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.577957] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.578157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.578176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.584079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.584396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.584417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.592101] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.592409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.592430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:43:17.640 [2024-06-10 11:51:46.602767] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:17.640 [2024-06-10 11:51:46.603010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:17.640 [2024-06-10 11:51:46.603031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:43:17.902 [2024-06-10 11:51:46.614336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90
00:43:17.902 [2024-06-10 11:51:46.614682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:43:17.902 [2024-06-10 11:51:46.614703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats for dozens of further WRITE commands between 11:51:46.626 and 11:51:46.995 (lba 20512, 24352, 15712, 5024, ... 21056): tcp.c:2062:data_crc32_calc_done reports a data digest error on tqpair=(0x1f44f50), nvme_qpair.c prints the offending WRITE, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 ...]
00:43:18.167 [2024-06-10 11:51:47.006256] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f44f50) with pdu=0x2000190fef90 00:43:18.167 [2024-06-10 11:51:47.006518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:18.167 [2024-06-10 11:51:47.006542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:43:18.167 00:43:18.167 Latency(us) 00:43:18.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:18.167 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:43:18.167 nvme0n1 : 2.01 3228.53 403.57 0.00 0.00 4945.41 1897.81 13271.04 00:43:18.167 =================================================================================================================== 00:43:18.167 Total : 3228.53 403.57 0.00 0.00 4945.41 1897.81 13271.04 00:43:18.167 0 00:43:18.167 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:43:18.167 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:43:18.167 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:43:18.167 | .driver_specific 00:43:18.167 | .nvme_error 00:43:18.167 | .status_code 00:43:18.167 | .command_transient_transport_error' 00:43:18.167 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2507409 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2507409 ']' 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2507409 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2507409 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2507409' 00:43:18.429 killing process with pid 2507409 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2507409 00:43:18.429 Received shutdown signal, test time was about 2.000000 seconds 00:43:18.429 00:43:18.429 Latency(us) 00:43:18.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:18.429 =================================================================================================================== 00:43:18.429 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:18.429 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2507409 00:43:18.690 
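The '(( 208 > 0 ))' check above is the pass criterion for this test: at least one WRITE must have completed with a transient transport error. A minimal sketch of how that count is read back, using the rpc.py invocation and jq filter shown in the trace (the /var/tmp/bperf.sock path and the nvme0n1 bdev name are specific to this run):

  # Query per-bdev I/O statistics from the bdevperf app over its RPC socket
  # and extract the transient transport error counter for nvme0n1.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
         | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test only passes if at least one such completion was observed.
  (( errs > 0 )) && echo "observed $errs transient transport errors"
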
11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2505334 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2505334 ']' 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2505334 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2505334 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2505334' 00:43:18.690 killing process with pid 2505334 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2505334 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2505334 00:43:18.690 00:43:18.690 real 0m14.582s 00:43:18.690 user 0m28.820s 00:43:18.690 sys 0m3.170s 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:43:18.690 ************************************ 00:43:18.690 END TEST nvmf_digest_error 00:43:18.690 ************************************ 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:18.690 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:18.690 rmmod nvme_tcp 00:43:18.952 rmmod nvme_fabrics 00:43:18.952 rmmod nvme_keyring 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2505334 ']' 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2505334 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 2505334 ']' 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 2505334 00:43:18.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2505334) - No such process 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 2505334 is not found' 00:43:18.952 Process with pid 2505334 is not found 00:43:18.952 11:51:47 
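Both killprocess calls above follow the same guarded pattern, which is why the second one (pid 2505334 was already killed by the digest_error cleanup just above) ends in a 'No such process' note rather than a failure. A simplified sketch of that pattern, not the literal autotest_common.sh helper:

  killprocess() {
    local pid=$1
    # kill -0 only probes whether the pid exists and can be signalled.
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "Process with pid $pid is not found"
      return 0
    fi
    # Check the process name before signalling it (the trace compares it
    # against reactor_* / sudo to decide how to kill it).
    ps --no-headers -o comm= "$pid"
    kill "$pid"
    # Reap it if it is a child of this shell; otherwise the kill is enough.
    wait "$pid" 2>/dev/null || true
  }
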
nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:18.952 11:51:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:20.866 11:51:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:20.866 00:43:20.866 real 0m38.478s 00:43:20.866 user 0m58.984s 00:43:20.866 sys 0m11.887s 00:43:20.866 11:51:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:20.866 11:51:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:43:20.866 ************************************ 00:43:20.866 END TEST nvmf_digest 00:43:20.866 ************************************ 00:43:21.127 11:51:49 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:43:21.127 11:51:49 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:43:21.127 11:51:49 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:43:21.127 11:51:49 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:43:21.127 11:51:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:43:21.127 11:51:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:21.127 11:51:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:21.127 ************************************ 00:43:21.127 START TEST nvmf_bdevperf 00:43:21.127 ************************************ 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:43:21.128 * Looking for test storage... 
00:43:21.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:21.128 11:51:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:43:21.128 11:51:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:29.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:29.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:29.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:29.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:29.272 11:51:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:29.272 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:29.272 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:29.272 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:29.272 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:29.272 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:29.272 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:29.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:29.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:43:29.273 00:43:29.273 --- 10.0.0.2 ping statistics --- 00:43:29.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:29.273 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:29.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
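Everything nvmf_tcp_init does above boils down to putting the target-side port in its own network namespace and giving each side an address before the reachability pings. Restating those traced steps as plain commands (cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are this rig's choices):

  ip netns add cvl_0_0_ns_spdk                  # namespace the target will live in
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
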
00:43:29.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:43:29.273 00:43:29.273 --- 10.0.0.1 ping statistics --- 00:43:29.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:29.273 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2512118 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2512118 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2512118 ']' 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:29.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:29.273 11:51:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.273 [2024-06-10 11:51:57.286684] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:29.273 [2024-06-10 11:51:57.286749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:29.273 EAL: No free 2048 kB hugepages reported on node 1 00:43:29.273 [2024-06-10 11:51:57.360303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:29.273 [2024-06-10 11:51:57.435762] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:43:29.273 [2024-06-10 11:51:57.435802] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:29.273 [2024-06-10 11:51:57.435810] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:29.273 [2024-06-10 11:51:57.435816] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:29.273 [2024-06-10 11:51:57.435822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:29.273 [2024-06-10 11:51:57.435955] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:43:29.273 [2024-06-10 11:51:57.436166] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:29.273 [2024-06-10 11:51:57.436167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.273 [2024-06-10 11:51:58.212080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:29.273 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.534 Malloc0 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:29.534 [2024-06-10 11:51:58.286115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:29.534 { 00:43:29.534 "params": { 00:43:29.534 "name": "Nvme$subsystem", 00:43:29.534 "trtype": "$TEST_TRANSPORT", 00:43:29.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:29.534 "adrfam": "ipv4", 00:43:29.534 "trsvcid": "$NVMF_PORT", 00:43:29.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:29.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:29.534 "hdgst": ${hdgst:-false}, 00:43:29.534 "ddgst": ${ddgst:-false} 00:43:29.534 }, 00:43:29.534 "method": "bdev_nvme_attach_controller" 00:43:29.534 } 00:43:29.534 EOF 00:43:29.534 )") 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:43:29.534 11:51:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:29.534 "params": { 00:43:29.534 "name": "Nvme1", 00:43:29.534 "trtype": "tcp", 00:43:29.534 "traddr": "10.0.0.2", 00:43:29.534 "adrfam": "ipv4", 00:43:29.534 "trsvcid": "4420", 00:43:29.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:29.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:29.534 "hdgst": false, 00:43:29.534 "ddgst": false 00:43:29.534 }, 00:43:29.534 "method": "bdev_nvme_attach_controller" 00:43:29.534 }' 00:43:29.534 [2024-06-10 11:51:58.341492] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:29.534 [2024-06-10 11:51:58.341541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512445 ] 00:43:29.534 EAL: No free 2048 kB hugepages reported on node 1 00:43:29.534 [2024-06-10 11:51:58.399499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:29.534 [2024-06-10 11:51:58.463977] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.104 Running I/O for 1 seconds... 
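rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py talking to the target's RPC socket, so the subsystem the bdevperf run connects to was provisioned with the equivalent of the sketch below (a restatement of the traced calls, not new commands; the 64/512 malloc geometry comes from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE in bdevperf.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192       # same transport options the test passes
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
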
00:43:31.056 00:43:31.056 Latency(us) 00:43:31.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:31.056 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:31.056 Verification LBA range: start 0x0 length 0x4000 00:43:31.056 Nvme1n1 : 1.01 9093.38 35.52 0.00 0.00 13989.21 2717.01 14417.92 00:43:31.056 =================================================================================================================== 00:43:31.056 Total : 9093.38 35.52 0.00 0.00 13989.21 2717.01 14417.92 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2512787 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:31.056 { 00:43:31.056 "params": { 00:43:31.056 "name": "Nvme$subsystem", 00:43:31.056 "trtype": "$TEST_TRANSPORT", 00:43:31.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:31.056 "adrfam": "ipv4", 00:43:31.056 "trsvcid": "$NVMF_PORT", 00:43:31.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:31.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:31.056 "hdgst": ${hdgst:-false}, 00:43:31.056 "ddgst": ${ddgst:-false} 00:43:31.056 }, 00:43:31.056 "method": "bdev_nvme_attach_controller" 00:43:31.056 } 00:43:31.056 EOF 00:43:31.056 )") 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:43:31.056 11:51:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:31.056 "params": { 00:43:31.056 "name": "Nvme1", 00:43:31.056 "trtype": "tcp", 00:43:31.056 "traddr": "10.0.0.2", 00:43:31.056 "adrfam": "ipv4", 00:43:31.056 "trsvcid": "4420", 00:43:31.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:31.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:31.056 "hdgst": false, 00:43:31.056 "ddgst": false 00:43:31.056 }, 00:43:31.056 "method": "bdev_nvme_attach_controller" 00:43:31.056 }' 00:43:31.056 [2024-06-10 11:51:59.998885] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:31.056 [2024-06-10 11:51:59.998938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2512787 ] 00:43:31.056 EAL: No free 2048 kB hugepages reported on node 1 00:43:31.317 [2024-06-10 11:52:00.059161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:31.317 [2024-06-10 11:52:00.125693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:31.577 Running I/O for 15 seconds... 
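On the initiator side, both bdevperf runs take the gen_nvmf_target_json output via --json. A sketch of the same invocations with the configuration written to a file instead of a process substitution; the bdev_nvme_attach_controller parameters and the bdevperf flags are copied from the log, while the file name, the relative bdevperf path, and the surrounding "subsystems"/"bdev" wrapper are assumptions:

cat > bdevperf_tgt.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

./build/examples/bdevperf --json bdevperf_tgt.json -q 128 -o 4096 -w verify -t 1        # 1-second verify pass
./build/examples/bdevperf --json bdevperf_tgt.json -q 128 -o 4096 -w verify -t 15 -f &  # 15-second run; -f copied verbatim from the test

A few seconds into the 15-second run the test kills a process with kill -9 (pid 2512118 below, per bdevperf.sh@33) which, judging from the connection-refused retries that follow, is the nvmf target; the flood of ABORTED - SQ DELETION completions and the reconnect attempts below are the initiator reacting to the target going away.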
00:43:34.125 11:52:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2512118 00:43:34.125 11:52:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:43:34.125 [2024-06-10 11:52:02.967521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:34.125 [2024-06-10 11:52:02.967566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:34.125 [2024-06-10 11:52:02.967588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:34.125 [2024-06-10 11:52:02.967606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:34.125 [2024-06-10 11:52:02.967626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.125 [2024-06-10 11:52:02.967688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.125 [2024-06-10 11:52:02.967856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.125 [2024-06-10 11:52:02.967863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.967873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.967883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.967893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.967901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.967911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.967918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.967929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.967937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.967949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.967962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.967975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.967986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.967999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:24 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59776 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:43:34.126 [2024-06-10 11:52:02.968375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 
11:52:02.968543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.126 [2024-06-10 11:52:02.968576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.126 [2024-06-10 11:52:02.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.126 [2024-06-10 11:52:02.968602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.127 [2024-06-10 11:52:02.968609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.127 [2024-06-10 11:52:02.968627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.127 [2024-06-10 11:52:02.968643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.127 [2024-06-10 11:52:02.968661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.127 [2024-06-10 11:52:02.968681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.127 [2024-06-10 11:52:02.968698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.127 [2024-06-10 11:52:02.968714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.968992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.968999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 
[2024-06-10 11:52:02.969229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.127 [2024-06-10 11:52:02.969270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.127 [2024-06-10 11:52:02.969280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60568 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:43:34.128 [2024-06-10 11:52:02.969790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 [2024-06-10 11:52:02.969889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:34.128 
[2024-06-10 11:52:02.969908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5340 is same with the state(5) to be set 00:43:34.128 [2024-06-10 11:52:02.969924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:43:34.128 [2024-06-10 11:52:02.969930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:43:34.128 [2024-06-10 11:52:02.969937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60400 len:8 PRP1 0x0 PRP2 0x0 00:43:34.128 [2024-06-10 11:52:02.969944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:34.128 [2024-06-10 11:52:02.969982] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xad5340 was disconnected and freed. reset controller. 00:43:34.128 [2024-06-10 11:52:02.973513] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:02.973535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:02.974337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:02.974354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:02.974364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:02.974585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:02.974810] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:02.974819] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:02.974827] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:02.978379] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
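Each retry cycle here is the same sequence: bdev_nvme disconnects the controller, the TCP reconnect to 10.0.0.2:4420 fails inside posix_sock_create with errno = 111, the controller is marked as being in a failed state, and the reset is reported as failed; the cycles below simply repeat it while the target stays down. Errno 111 is ECONNREFUSED on Linux, the expected error while nothing is listening on the target port; the mapping can be checked with a one-liner:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # prints: ECONNREFUSED - Connection refused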
00:43:34.129 [2024-06-10 11:52:02.987594] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:02.988169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:02.988188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:02.988196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:02.988416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:02.988636] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:02.988645] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:02.988652] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:02.992206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.129 [2024-06-10 11:52:03.001419] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:03.002080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:03.002119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:03.002135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:03.002376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:03.002606] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:03.002617] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:03.002626] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:03.006206] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.129 [2024-06-10 11:52:03.015244] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:03.015970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:03.016008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:03.016019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:03.016258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:03.016482] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:03.016491] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:03.016499] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:03.020071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.129 [2024-06-10 11:52:03.029092] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:03.029911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:03.029949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:03.029959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:03.030198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:03.030422] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:03.030432] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:03.030439] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:03.034008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.129 [2024-06-10 11:52:03.043036] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:03.043635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:03.043654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:03.043662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:03.043888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:03.044109] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:03.044122] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:03.044129] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:03.047689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.129 [2024-06-10 11:52:03.056912] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:03.057515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:03.057531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:03.057539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:03.057764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:03.057985] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:03.057994] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:03.058001] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:03.061552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.129 [2024-06-10 11:52:03.070777] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:03.071389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:03.071404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:03.071412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:03.071631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:03.071856] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:03.071866] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:03.071874] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:03.075424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.129 [2024-06-10 11:52:03.084645] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.129 [2024-06-10 11:52:03.085254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.129 [2024-06-10 11:52:03.085270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.129 [2024-06-10 11:52:03.085277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.129 [2024-06-10 11:52:03.085496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.129 [2024-06-10 11:52:03.085721] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.129 [2024-06-10 11:52:03.085730] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.129 [2024-06-10 11:52:03.085737] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.129 [2024-06-10 11:52:03.089290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.392 [2024-06-10 11:52:03.098513] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.392 [2024-06-10 11:52:03.099096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.392 [2024-06-10 11:52:03.099112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.392 [2024-06-10 11:52:03.099120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.392 [2024-06-10 11:52:03.099339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.392 [2024-06-10 11:52:03.099559] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.392 [2024-06-10 11:52:03.099567] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.392 [2024-06-10 11:52:03.099574] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.392 [2024-06-10 11:52:03.103130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.392 [2024-06-10 11:52:03.112360] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.392 [2024-06-10 11:52:03.113003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.392 [2024-06-10 11:52:03.113020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.392 [2024-06-10 11:52:03.113028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.392 [2024-06-10 11:52:03.113247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.392 [2024-06-10 11:52:03.113466] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.392 [2024-06-10 11:52:03.113475] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.392 [2024-06-10 11:52:03.113483] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.392 [2024-06-10 11:52:03.117044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.392 [2024-06-10 11:52:03.126260] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.392 [2024-06-10 11:52:03.126878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.392 [2024-06-10 11:52:03.126894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.392 [2024-06-10 11:52:03.126902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.392 [2024-06-10 11:52:03.127121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.392 [2024-06-10 11:52:03.127340] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.392 [2024-06-10 11:52:03.127349] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.392 [2024-06-10 11:52:03.127356] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.392 [2024-06-10 11:52:03.131014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.392 [2024-06-10 11:52:03.140244] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.392 [2024-06-10 11:52:03.140754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.392 [2024-06-10 11:52:03.140771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.392 [2024-06-10 11:52:03.140779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.392 [2024-06-10 11:52:03.141002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.392 [2024-06-10 11:52:03.141223] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.392 [2024-06-10 11:52:03.141231] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.392 [2024-06-10 11:52:03.141238] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.392 [2024-06-10 11:52:03.144793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.392 [2024-06-10 11:52:03.154222] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.392 [2024-06-10 11:52:03.154830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.392 [2024-06-10 11:52:03.154846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.392 [2024-06-10 11:52:03.154854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.392 [2024-06-10 11:52:03.155073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.392 [2024-06-10 11:52:03.155293] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.392 [2024-06-10 11:52:03.155301] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.392 [2024-06-10 11:52:03.155308] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.392 [2024-06-10 11:52:03.158860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.392 [2024-06-10 11:52:03.168078] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.392 [2024-06-10 11:52:03.168681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.392 [2024-06-10 11:52:03.168697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.392 [2024-06-10 11:52:03.168705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.392 [2024-06-10 11:52:03.168924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.392 [2024-06-10 11:52:03.169145] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.392 [2024-06-10 11:52:03.169154] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.392 [2024-06-10 11:52:03.169160] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.392 [2024-06-10 11:52:03.172714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.392 [2024-06-10 11:52:03.181934] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.392 [2024-06-10 11:52:03.182375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.392 [2024-06-10 11:52:03.182393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.392 [2024-06-10 11:52:03.182400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.392 [2024-06-10 11:52:03.182621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.392 [2024-06-10 11:52:03.182849] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.182860] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.182870] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.186424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.393 [2024-06-10 11:52:03.196081] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.196760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.196798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.196808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.197047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.197270] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.197280] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.197287] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.200851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.393 [2024-06-10 11:52:03.210078] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.210750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.210788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.210800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.211040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.211264] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.211274] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.211282] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.214848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.393 [2024-06-10 11:52:03.224066] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.224763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.224802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.224814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.225056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.225279] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.225289] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.225297] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.228861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.393 [2024-06-10 11:52:03.237872] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.238379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.238402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.238410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.238630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.238855] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.238865] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.238872] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.242421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.393 [2024-06-10 11:52:03.251842] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.252488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.252526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.252537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.252782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.253006] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.253016] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.253023] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.256577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.393 [2024-06-10 11:52:03.265799] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.266369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.266389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.266397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.266617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.266843] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.266853] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.266860] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.270408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.393 [2024-06-10 11:52:03.279616] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.280231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.280247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.280255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.280475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.280705] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.280715] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.280722] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.284268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.393 [2024-06-10 11:52:03.293473] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.294045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.294061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.294068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.294288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.294508] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.294516] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.294523] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.298083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.393 [2024-06-10 11:52:03.307308] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.393 [2024-06-10 11:52:03.307922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.393 [2024-06-10 11:52:03.307939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.393 [2024-06-10 11:52:03.307947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.393 [2024-06-10 11:52:03.308166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.393 [2024-06-10 11:52:03.308386] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.393 [2024-06-10 11:52:03.308395] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.393 [2024-06-10 11:52:03.308401] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.393 [2024-06-10 11:52:03.311952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.394 [2024-06-10 11:52:03.321157] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.394 [2024-06-10 11:52:03.321896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.394 [2024-06-10 11:52:03.321934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.394 [2024-06-10 11:52:03.321944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.394 [2024-06-10 11:52:03.322183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.394 [2024-06-10 11:52:03.322407] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.394 [2024-06-10 11:52:03.322416] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.394 [2024-06-10 11:52:03.322424] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.394 [2024-06-10 11:52:03.325993] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.394 [2024-06-10 11:52:03.335004] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.394 [2024-06-10 11:52:03.335718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.394 [2024-06-10 11:52:03.335756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.394 [2024-06-10 11:52:03.335767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.394 [2024-06-10 11:52:03.336007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.394 [2024-06-10 11:52:03.336231] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.394 [2024-06-10 11:52:03.336241] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.394 [2024-06-10 11:52:03.336249] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.394 [2024-06-10 11:52:03.339812] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.394 [2024-06-10 11:52:03.348819] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.394 [2024-06-10 11:52:03.349531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.394 [2024-06-10 11:52:03.349569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.394 [2024-06-10 11:52:03.349579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.394 [2024-06-10 11:52:03.349827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.394 [2024-06-10 11:52:03.350052] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.394 [2024-06-10 11:52:03.350062] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.394 [2024-06-10 11:52:03.350069] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.394 [2024-06-10 11:52:03.353623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.656 [2024-06-10 11:52:03.362627] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.656 [2024-06-10 11:52:03.363339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.363377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.363388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.363627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.363859] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.363870] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.363878] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.367431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.657 [2024-06-10 11:52:03.376435] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.377145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.377183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.377198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.377437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.377660] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.377679] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.377687] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.381240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.657 [2024-06-10 11:52:03.390246] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.390970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.391007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.391018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.391256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.391481] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.391491] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.391499] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.395062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.657 [2024-06-10 11:52:03.404067] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.404767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.404805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.404817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.405057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.405280] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.405290] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.405297] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.408870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.657 [2024-06-10 11:52:03.417878] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.418587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.418624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.418634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.418882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.419106] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.419123] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.419131] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.422688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.657 [2024-06-10 11:52:03.431698] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.432418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.432455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.432466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.432714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.432939] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.432948] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.432956] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.436509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.657 [2024-06-10 11:52:03.445512] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.446144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.446182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.446193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.446432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.446655] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.446665] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.446681] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.450238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.657 [2024-06-10 11:52:03.459454] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.460166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.460204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.460214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.460453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.460686] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.460697] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.460704] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.464260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.657 [2024-06-10 11:52:03.473265] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.473985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.474022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.474033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.474271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.474495] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.474505] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.474512] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.478075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.657 [2024-06-10 11:52:03.487090] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.487788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.487826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.657 [2024-06-10 11:52:03.487838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.657 [2024-06-10 11:52:03.488078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.657 [2024-06-10 11:52:03.488302] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.657 [2024-06-10 11:52:03.488312] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.657 [2024-06-10 11:52:03.488320] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.657 [2024-06-10 11:52:03.491883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.657 [2024-06-10 11:52:03.500898] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.657 [2024-06-10 11:52:03.501473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.657 [2024-06-10 11:52:03.501491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.501499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.501725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.501946] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.501955] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.501962] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.505506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.658 [2024-06-10 11:52:03.514732] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.515307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.515323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.515330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.515554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.515814] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.515824] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.515831] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.519381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.658 [2024-06-10 11:52:03.528586] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.529288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.529326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.529336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.529575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.529808] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.529819] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.529827] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.533387] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.658 [2024-06-10 11:52:03.542396] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.543056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.543094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.543104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.543343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.543566] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.543576] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.543584] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.547147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.658 [2024-06-10 11:52:03.556360] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.557015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.557054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.557065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.557303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.557527] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.557536] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.557547] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.561110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.658 [2024-06-10 11:52:03.570322] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.571000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.571039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.571049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.571287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.571511] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.571521] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.571528] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.575091] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.658 [2024-06-10 11:52:03.584307] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.584938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.584976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.584987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.585225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.585448] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.585458] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.585465] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.589028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.658 [2024-06-10 11:52:03.598243] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.598871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.598909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.598919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.599157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.599380] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.599390] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.599398] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.602959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.658 [2024-06-10 11:52:03.612182] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.658 [2024-06-10 11:52:03.612806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.658 [2024-06-10 11:52:03.612830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.658 [2024-06-10 11:52:03.612838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.658 [2024-06-10 11:52:03.613059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.658 [2024-06-10 11:52:03.613278] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.658 [2024-06-10 11:52:03.613287] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.658 [2024-06-10 11:52:03.613294] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.658 [2024-06-10 11:52:03.616848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.658 [2024-06-10 11:52:03.626055] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.920 [2024-06-10 11:52:03.626598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.920 [2024-06-10 11:52:03.626614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.920 [2024-06-10 11:52:03.626623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.920 [2024-06-10 11:52:03.626850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.920 [2024-06-10 11:52:03.627071] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.920 [2024-06-10 11:52:03.627080] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.920 [2024-06-10 11:52:03.627087] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.920 [2024-06-10 11:52:03.630633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.920 [2024-06-10 11:52:03.639846] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.920 [2024-06-10 11:52:03.640421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.640437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.640444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.640663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.640889] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.640898] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.640905] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.644451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.921 [2024-06-10 11:52:03.653656] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.654356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.654394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.654405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.654644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.654880] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.654891] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.654899] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.658451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.921 [2024-06-10 11:52:03.667456] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.668180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.668218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.668228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.668467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.668699] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.668710] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.668718] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.672273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.921 [2024-06-10 11:52:03.681279] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.681980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.682018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.682029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.682268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.682491] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.682501] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.682509] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.686073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.921 [2024-06-10 11:52:03.695079] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.695749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.695787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.695799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.696039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.696263] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.696272] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.696280] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.699843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.921 [2024-06-10 11:52:03.709073] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.709744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.709782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.709793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.710032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.710256] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.710265] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.710273] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.713837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.921 [2024-06-10 11:52:03.723050] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.723768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.723807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.723817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.724056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.724279] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.724289] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.724297] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.727862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.921 [2024-06-10 11:52:03.736871] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.737513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.737551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.737561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.737808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.738032] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.738041] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.738049] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.741612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.921 [2024-06-10 11:52:03.750826] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.751440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.751459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.751471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.751696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.751917] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.751926] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.921 [2024-06-10 11:52:03.751933] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.921 [2024-06-10 11:52:03.755481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.921 [2024-06-10 11:52:03.764693] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.921 [2024-06-10 11:52:03.765389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.921 [2024-06-10 11:52:03.765427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.921 [2024-06-10 11:52:03.765438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.921 [2024-06-10 11:52:03.765686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.921 [2024-06-10 11:52:03.765910] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.921 [2024-06-10 11:52:03.765920] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.765928] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.769484] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.922 [2024-06-10 11:52:03.778515] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.779212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.779251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.779261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.779499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.779731] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.779742] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.779749] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.783302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.922 [2024-06-10 11:52:03.792517] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.793194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.793232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.793243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.793482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.793714] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.793729] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.793738] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.797292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.922 [2024-06-10 11:52:03.806507] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.807109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.807128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.807136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.807356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.807576] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.807585] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.807592] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.811145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.922 [2024-06-10 11:52:03.820353] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.821038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.821076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.821087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.821326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.821549] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.821559] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.821567] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.825130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.922 [2024-06-10 11:52:03.834344] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.835002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.835040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.835050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.835289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.835512] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.835522] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.835529] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.839091] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.922 [2024-06-10 11:52:03.848310] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.848935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.848973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.848984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.849223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.849446] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.849455] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.849463] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.853026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.922 [2024-06-10 11:52:03.862246] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.862963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.863001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.863012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.863250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.863474] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.863483] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.863491] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.867054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:34.922 [2024-06-10 11:52:03.876059] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:34.922 [2024-06-10 11:52:03.876661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:34.922 [2024-06-10 11:52:03.876705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:34.922 [2024-06-10 11:52:03.876717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:34.922 [2024-06-10 11:52:03.876957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:34.922 [2024-06-10 11:52:03.877181] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:34.922 [2024-06-10 11:52:03.877191] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:34.922 [2024-06-10 11:52:03.877198] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:34.922 [2024-06-10 11:52:03.880755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:34.922 [2024-06-10 11:52:03.889976] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.185 [2024-06-10 11:52:03.890645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.185 [2024-06-10 11:52:03.890692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.185 [2024-06-10 11:52:03.890708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.185 [2024-06-10 11:52:03.890955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.185 [2024-06-10 11:52:03.891181] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.185 [2024-06-10 11:52:03.891190] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.185 [2024-06-10 11:52:03.891198] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.185 [2024-06-10 11:52:03.894763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.185 [2024-06-10 11:52:03.903792] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.185 [2024-06-10 11:52:03.904412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.185 [2024-06-10 11:52:03.904431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.185 [2024-06-10 11:52:03.904439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.185 [2024-06-10 11:52:03.904659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.185 [2024-06-10 11:52:03.904886] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.185 [2024-06-10 11:52:03.904897] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.185 [2024-06-10 11:52:03.904904] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.185 [2024-06-10 11:52:03.908467] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.185 [2024-06-10 11:52:03.917691] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.185 [2024-06-10 11:52:03.918261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.185 [2024-06-10 11:52:03.918277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.185 [2024-06-10 11:52:03.918285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.185 [2024-06-10 11:52:03.918504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.185 [2024-06-10 11:52:03.918729] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.185 [2024-06-10 11:52:03.918739] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.185 [2024-06-10 11:52:03.918745] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.185 [2024-06-10 11:52:03.922295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.185 [2024-06-10 11:52:03.931519] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.185 [2024-06-10 11:52:03.932097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.185 [2024-06-10 11:52:03.932113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.185 [2024-06-10 11:52:03.932120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.185 [2024-06-10 11:52:03.932339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.185 [2024-06-10 11:52:03.932558] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.185 [2024-06-10 11:52:03.932567] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.185 [2024-06-10 11:52:03.932578] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.185 [2024-06-10 11:52:03.936134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.185 [2024-06-10 11:52:03.945357] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.185 [2024-06-10 11:52:03.945960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.185 [2024-06-10 11:52:03.945977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.185 [2024-06-10 11:52:03.945984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.185 [2024-06-10 11:52:03.946204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.185 [2024-06-10 11:52:03.946424] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.185 [2024-06-10 11:52:03.946432] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.185 [2024-06-10 11:52:03.946440] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.185 [2024-06-10 11:52:03.950003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.185 [2024-06-10 11:52:03.959239] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.185 [2024-06-10 11:52:03.959784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.185 [2024-06-10 11:52:03.959801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.185 [2024-06-10 11:52:03.959809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.185 [2024-06-10 11:52:03.960028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.185 [2024-06-10 11:52:03.960248] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.185 [2024-06-10 11:52:03.960256] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.185 [2024-06-10 11:52:03.960263] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.185 [2024-06-10 11:52:03.963825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.185 [2024-06-10 11:52:03.973060] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.185 [2024-06-10 11:52:03.973764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.185 [2024-06-10 11:52:03.973803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.185 [2024-06-10 11:52:03.973814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.185 [2024-06-10 11:52:03.974052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.185 [2024-06-10 11:52:03.974275] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:03.974285] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:03.974295] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:03.977863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.186 [2024-06-10 11:52:03.986894] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:03.987489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:03.987512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:03.987521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:03.987747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:03.987970] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:03.987979] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:03.987986] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:03.991545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.186 [2024-06-10 11:52:04.000860] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.001468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.001485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.001493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.001716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.001937] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.001946] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.001953] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.005513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.186 [2024-06-10 11:52:04.014756] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.015463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.015502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.015512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.015768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.015994] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.016003] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.016011] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.019565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.186 [2024-06-10 11:52:04.028600] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.029238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.029257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.029265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.029485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.029716] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.029726] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.029733] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.033290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.186 [2024-06-10 11:52:04.042522] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.043111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.043128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.043136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.043355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.043574] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.043583] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.043590] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.047153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.186 [2024-06-10 11:52:04.056382] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.057044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.057082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.057093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.057332] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.057555] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.057565] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.057572] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.061142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.186 [2024-06-10 11:52:04.070369] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.071042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.071080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.071090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.071329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.071552] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.071562] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.071569] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.075143] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.186 [2024-06-10 11:52:04.084192] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.084900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.084938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.084948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.085187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.085411] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.085421] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.085428] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.088998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.186 [2024-06-10 11:52:04.098046] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.098763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.098801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.098813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.099053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.099276] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.099286] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.099294] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.102862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.186 [2024-06-10 11:52:04.111888] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.186 [2024-06-10 11:52:04.112585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.186 [2024-06-10 11:52:04.112623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.186 [2024-06-10 11:52:04.112634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.186 [2024-06-10 11:52:04.112886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.186 [2024-06-10 11:52:04.113111] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.186 [2024-06-10 11:52:04.113120] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.186 [2024-06-10 11:52:04.113128] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.186 [2024-06-10 11:52:04.116686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.186 [2024-06-10 11:52:04.125696] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.187 [2024-06-10 11:52:04.126399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.187 [2024-06-10 11:52:04.126436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.187 [2024-06-10 11:52:04.126454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.187 [2024-06-10 11:52:04.126708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.187 [2024-06-10 11:52:04.126934] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.187 [2024-06-10 11:52:04.126943] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.187 [2024-06-10 11:52:04.126951] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.187 [2024-06-10 11:52:04.130506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.187 [2024-06-10 11:52:04.139528] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.187 [2024-06-10 11:52:04.140135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.187 [2024-06-10 11:52:04.140154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.187 [2024-06-10 11:52:04.140162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.187 [2024-06-10 11:52:04.140381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.187 [2024-06-10 11:52:04.140602] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.187 [2024-06-10 11:52:04.140610] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.187 [2024-06-10 11:52:04.140617] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.187 [2024-06-10 11:52:04.144187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.187 [2024-06-10 11:52:04.153430] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.187 [2024-06-10 11:52:04.154014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.187 [2024-06-10 11:52:04.154031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.187 [2024-06-10 11:52:04.154038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.187 [2024-06-10 11:52:04.154257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.187 [2024-06-10 11:52:04.154477] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.187 [2024-06-10 11:52:04.154485] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.187 [2024-06-10 11:52:04.154492] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.158058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.449 [2024-06-10 11:52:04.167287] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.167863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.167880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.167887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.168107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.168326] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.168339] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.168346] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.171908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.449 [2024-06-10 11:52:04.181136] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.181704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.181721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.181728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.181948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.182167] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.182176] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.182183] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.185743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.449 [2024-06-10 11:52:04.195157] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.195729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.195747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.195754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.195975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.196194] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.196204] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.196211] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.199772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.449 [2024-06-10 11:52:04.209013] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.209613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.209629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.209636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.209862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.210082] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.210091] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.210098] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.213653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.449 [2024-06-10 11:52:04.222882] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.223452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.223468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.223475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.223700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.223920] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.223929] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.223936] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.227490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.449 [2024-06-10 11:52:04.236724] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.237279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.237294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.237302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.237521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.237746] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.237755] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.237762] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.241340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.449 [2024-06-10 11:52:04.250587] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.251166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.251182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.251190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.251409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.251629] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.251637] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.251644] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.255203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.449 [2024-06-10 11:52:04.264442] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.265067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.265083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.265091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.265314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.265535] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.265544] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.265550] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.269113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.449 [2024-06-10 11:52:04.278343] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.278827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.278844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.278852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.279072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.279292] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.449 [2024-06-10 11:52:04.279301] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.449 [2024-06-10 11:52:04.279308] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.449 [2024-06-10 11:52:04.282870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.449 [2024-06-10 11:52:04.292304] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.449 [2024-06-10 11:52:04.292808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.449 [2024-06-10 11:52:04.292824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.449 [2024-06-10 11:52:04.292832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.449 [2024-06-10 11:52:04.293052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.449 [2024-06-10 11:52:04.293271] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.293279] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.293286] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.296840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.450 [2024-06-10 11:52:04.306282] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.306874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.306891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.306898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.307117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.307342] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.307353] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.307363] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.310937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.450 [2024-06-10 11:52:04.320166] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.320780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.320797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.320804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.321024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.321244] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.321254] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.321260] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.324818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.450 [2024-06-10 11:52:04.334044] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.334649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.334665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.334679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.334898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.335118] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.335127] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.335134] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.338695] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.450 [2024-06-10 11:52:04.347942] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.348515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.348532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.348539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.348773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.348996] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.349006] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.349012] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.352567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.450 [2024-06-10 11:52:04.361818] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.362382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.362401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.362408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.362627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.362856] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.362866] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.362872] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.366431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.450 [2024-06-10 11:52:04.375678] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.376281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.376298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.376305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.376524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.376762] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.376772] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.376779] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.380331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.450 [2024-06-10 11:52:04.389576] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.390170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.390186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.390193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.390412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.390632] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.390642] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.390648] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.394216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.450 [2024-06-10 11:52:04.403454] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.404056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.404073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.404080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.404299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.404523] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.404532] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.450 [2024-06-10 11:52:04.404539] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.450 [2024-06-10 11:52:04.408111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.450 [2024-06-10 11:52:04.417359] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.450 [2024-06-10 11:52:04.417916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.450 [2024-06-10 11:52:04.417932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.450 [2024-06-10 11:52:04.417939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.450 [2024-06-10 11:52:04.418158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.450 [2024-06-10 11:52:04.418378] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.450 [2024-06-10 11:52:04.418387] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.451 [2024-06-10 11:52:04.418394] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.421957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.431198] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.431756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.431773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.431781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.432000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.432220] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.432229] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.432236] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.435798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.445042] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.445644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.445659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.445667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.445893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.446113] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.446122] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.446129] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.449697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.458937] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.459495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.459512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.459519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.459743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.459964] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.459973] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.459980] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.463536] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.472788] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.473386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.473402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.473410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.473629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.473854] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.473864] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.473871] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.477429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.486676] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.487248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.487263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.487271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.487490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.487716] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.487727] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.487734] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.491295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.500539] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.501154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.501171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.501182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.501401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.501621] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.501630] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.501637] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.505202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.514458] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.515040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.515057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.515064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.515283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.515503] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.515512] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.515519] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.519081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.528321] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.528830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.528848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.528855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.529075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.529294] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.529303] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.529310] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.532872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.542332] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.542773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.542790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.542797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.543016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.543236] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.543249] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.543257] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.546822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.556269] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.556826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.556843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.556851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.557071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.557290] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.557298] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.557305] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.560866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.570102] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.570710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.570727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.570735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.570954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.571174] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.571183] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.571190] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.574749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.583984] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.584548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.584564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.584571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.584808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.585029] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.585039] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.585046] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.588598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.597844] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.598299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.598317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.598324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.598544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.598772] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.598783] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.598790] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.602350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.611816] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.612420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.612436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.612443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.612662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.612895] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.612906] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.612913] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.616464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.625716] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.626403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.626440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.626450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.626700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.626924] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.626934] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.626941] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.630515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.639559] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.640108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.640127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.640135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.640359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.640580] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.640589] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.640596] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.644166] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.653416] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.654036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.654053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.654061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.654280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.654500] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.654509] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.654516] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.658081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.712 [2024-06-10 11:52:04.667326] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.668436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.668460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.712 [2024-06-10 11:52:04.668468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.712 [2024-06-10 11:52:04.668708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.712 [2024-06-10 11:52:04.668932] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.712 [2024-06-10 11:52:04.668941] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.712 [2024-06-10 11:52:04.668948] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.712 [2024-06-10 11:52:04.672506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.712 [2024-06-10 11:52:04.681338] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.712 [2024-06-10 11:52:04.681915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.712 [2024-06-10 11:52:04.681932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.681941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.682162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.682384] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.682393] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.682404] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.685974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.975 [2024-06-10 11:52:04.695240] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.695821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.695838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.695846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.696066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.696286] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.696295] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.696302] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.699869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.975 [2024-06-10 11:52:04.709130] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.709657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.709679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.709687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.709906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.710126] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.710134] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.710141] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.713702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.975 [2024-06-10 11:52:04.722941] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.723544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.723560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.723567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.723793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.724013] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.724022] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.724029] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.727585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.975 [2024-06-10 11:52:04.736838] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.737399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.737418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.737425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.737644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.737870] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.737880] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.737888] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.741449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.975 [2024-06-10 11:52:04.750693] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.751290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.751306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.751313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.751532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.751758] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.751767] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.751775] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.755338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.975 [2024-06-10 11:52:04.764590] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.765207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.765224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.765232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.765451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.765676] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.765686] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.765693] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.769249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.975 [2024-06-10 11:52:04.778491] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.779103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.779120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.779127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.779346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.779569] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.779578] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.779585] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.783149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.975 [2024-06-10 11:52:04.792390] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.792958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.792975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.792983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.793202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.793421] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.793431] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.793438] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.797001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.975 [2024-06-10 11:52:04.806242] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.806809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.975 [2024-06-10 11:52:04.806827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.975 [2024-06-10 11:52:04.806834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.975 [2024-06-10 11:52:04.807054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.975 [2024-06-10 11:52:04.807273] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.975 [2024-06-10 11:52:04.807282] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.975 [2024-06-10 11:52:04.807289] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.975 [2024-06-10 11:52:04.810864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.975 [2024-06-10 11:52:04.820102] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.975 [2024-06-10 11:52:04.820706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.820723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.820730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.820950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.821169] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.821178] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.821184] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.824749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.976 [2024-06-10 11:52:04.833988] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.834594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.834610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.834618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.834843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.835064] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.835073] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.835080] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.838637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.976 [2024-06-10 11:52:04.847879] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.848435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.848451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.848458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.848688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.848913] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.848923] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.848930] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.852482] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.976 [2024-06-10 11:52:04.861731] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.862332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.862349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.862356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.862575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.862801] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.862811] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.862818] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.866377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.976 [2024-06-10 11:52:04.875625] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.876193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.876209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.876220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.876440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.876659] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.876674] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.876681] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.880242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.976 [2024-06-10 11:52:04.889492] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.890102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.890120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.890127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.890348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.890568] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.890578] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.890585] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.894147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.976 [2024-06-10 11:52:04.903386] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.903988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.904005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.904013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.904232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.904452] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.904461] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.904468] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.908045] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:35.976 [2024-06-10 11:52:04.917287] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.917852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.917869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.917876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.918096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.918315] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.918327] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.918334] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.921898] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:35.976 [2024-06-10 11:52:04.931138] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:35.976 [2024-06-10 11:52:04.931738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:35.976 [2024-06-10 11:52:04.931754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:35.976 [2024-06-10 11:52:04.931762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:35.976 [2024-06-10 11:52:04.931981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:35.976 [2024-06-10 11:52:04.932201] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:35.976 [2024-06-10 11:52:04.932210] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:35.976 [2024-06-10 11:52:04.932217] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:35.976 [2024-06-10 11:52:04.935782] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.238 [2024-06-10 11:52:04.945027] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:04.945588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:04.945604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:04.945612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:04.945836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:04.946056] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:04.946065] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:04.946072] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:04.949627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.238 [2024-06-10 11:52:04.958875] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:04.959476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:04.959492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:04.959500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:04.959725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:04.959946] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:04.959954] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:04.959962] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:04.963520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.238 [2024-06-10 11:52:04.972778] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:04.973484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:04.973521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:04.973531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:04.973785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:04.974011] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:04.974020] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:04.974028] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:04.977585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.238 [2024-06-10 11:52:04.986632] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:04.987259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:04.987277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:04.987286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:04.987506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:04.987737] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:04.987748] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:04.987755] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:04.991311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.238 [2024-06-10 11:52:05.000553] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:05.001163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:05.001180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:05.001188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:05.001407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:05.001627] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:05.001636] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:05.001643] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:05.005205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.238 [2024-06-10 11:52:05.014466] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:05.015157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:05.015196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:05.015206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:05.015450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:05.015681] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:05.015692] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:05.015701] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:05.019259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.238 [2024-06-10 11:52:05.028280] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:05.028862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:05.028881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:05.028889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:05.029109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:05.029329] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:05.029339] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:05.029345] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:05.032990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.238 [2024-06-10 11:52:05.042220] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:05.042819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:05.042837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:05.042845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:05.043064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:05.043284] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:05.043293] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:05.043300] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:05.046860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.238 [2024-06-10 11:52:05.056085] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:05.056653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:05.056674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:05.056682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:05.056901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:05.057121] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:05.057130] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:05.057141] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:05.060700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.238 [2024-06-10 11:52:05.069922] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:05.070525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:05.070541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:05.070550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:05.070780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:05.071003] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:05.071012] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:05.071019] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:05.074565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.238 [2024-06-10 11:52:05.083799] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.238 [2024-06-10 11:52:05.084400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.238 [2024-06-10 11:52:05.084416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.238 [2024-06-10 11:52:05.084424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.238 [2024-06-10 11:52:05.084643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.238 [2024-06-10 11:52:05.084868] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.238 [2024-06-10 11:52:05.084879] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.238 [2024-06-10 11:52:05.084886] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.238 [2024-06-10 11:52:05.088439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.239 [2024-06-10 11:52:05.097667] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.098368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.098406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.098416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.098655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.098888] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.098899] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.098906] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.102466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.239 [2024-06-10 11:52:05.111498] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.112122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.112145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.112153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.112373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.112593] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.112601] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.112608] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.116251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.239 [2024-06-10 11:52:05.125485] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.126093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.126110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.126118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.126338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.126557] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.126566] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.126573] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.130133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.239 [2024-06-10 11:52:05.139354] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.140005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.140043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.140054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.140292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.140515] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.140525] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.140533] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.144099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.239 [2024-06-10 11:52:05.153322] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.154012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.154049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.154060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.154298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.154526] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.154536] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.154544] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.158112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.239 [2024-06-10 11:52:05.167130] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.167775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.167813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.167825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.168067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.168290] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.168300] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.168308] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.171877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.239 [2024-06-10 11:52:05.181111] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.181771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.181808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.181819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.182057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.182281] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.182291] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.182298] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.185866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.239 [2024-06-10 11:52:05.195286] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.239 [2024-06-10 11:52:05.195990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.239 [2024-06-10 11:52:05.196029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.239 [2024-06-10 11:52:05.196039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.239 [2024-06-10 11:52:05.196278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.239 [2024-06-10 11:52:05.196501] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.239 [2024-06-10 11:52:05.196511] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.239 [2024-06-10 11:52:05.196518] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.239 [2024-06-10 11:52:05.200092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.541 [2024-06-10 11:52:05.209123] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.209766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.209804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.209816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.210058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.210282] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.210291] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.210300] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.213869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.541 [2024-06-10 11:52:05.223095] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.223757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.223796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.223808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.224048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.224272] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.224282] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.224289] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.227856] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.541 [2024-06-10 11:52:05.237082] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.237716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.237754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.237766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.238006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.238230] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.238240] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.238247] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.241815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.541 [2024-06-10 11:52:05.251051] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.251626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.251644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.251657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.251883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.252103] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.252112] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.252119] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.255674] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.541 [2024-06-10 11:52:05.264893] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.265496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.265513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.265520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.265745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.265966] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.265975] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.265982] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.269536] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.541 [2024-06-10 11:52:05.278764] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.279375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.279391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.279399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.279618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.279846] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.279857] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.279864] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.283418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.541 [2024-06-10 11:52:05.292632] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.293313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.293351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.293362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.293601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.293837] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.293852] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.293860] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.297414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.541 [2024-06-10 11:52:05.306441] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.307119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.307158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.541 [2024-06-10 11:52:05.307168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.541 [2024-06-10 11:52:05.307407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.541 [2024-06-10 11:52:05.307630] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.541 [2024-06-10 11:52:05.307640] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.541 [2024-06-10 11:52:05.307647] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.541 [2024-06-10 11:52:05.311227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.541 [2024-06-10 11:52:05.320241] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.541 [2024-06-10 11:52:05.320950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.541 [2024-06-10 11:52:05.320988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.320998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.321237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.321460] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.321470] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.321477] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.325044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.542 [2024-06-10 11:52:05.334060] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.334769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.334807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.334817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.335056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.335280] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.335289] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.335297] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.338864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.542 [2024-06-10 11:52:05.347889] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.348542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.348580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.348591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.348843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.349068] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.349078] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.349085] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.352638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.542 [2024-06-10 11:52:05.361864] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.362520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.362558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.362568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.362821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.363047] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.363057] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.363064] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.366618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.542 [2024-06-10 11:52:05.375660] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.376334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.376372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.376383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.376621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.376858] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.376870] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.376877] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.380433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.542 [2024-06-10 11:52:05.389457] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.390141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.390178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.390189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.390432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.390656] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.390665] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.390685] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.394248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.542 [2024-06-10 11:52:05.403262] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.403981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.404019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.404030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.404268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.404492] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.404502] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.404509] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.408088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.542 [2024-06-10 11:52:05.417108] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.417802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.417840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.417852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.418091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.418315] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.418325] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.418332] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.421900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.542 [2024-06-10 11:52:05.430915] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.431620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.431658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.431677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.431922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.432146] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.432156] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.432168] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.435735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.542 [2024-06-10 11:52:05.444742] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.445324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.445361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.445372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.445611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.445848] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.542 [2024-06-10 11:52:05.445861] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.542 [2024-06-10 11:52:05.445868] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.542 [2024-06-10 11:52:05.449425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.542 [2024-06-10 11:52:05.458653] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.542 [2024-06-10 11:52:05.459324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.542 [2024-06-10 11:52:05.459361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.542 [2024-06-10 11:52:05.459372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.542 [2024-06-10 11:52:05.459611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.542 [2024-06-10 11:52:05.459848] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.543 [2024-06-10 11:52:05.459859] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.543 [2024-06-10 11:52:05.459867] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.543 [2024-06-10 11:52:05.463420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.543 [2024-06-10 11:52:05.472454] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.543 [2024-06-10 11:52:05.473114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.543 [2024-06-10 11:52:05.473153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.543 [2024-06-10 11:52:05.473164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.543 [2024-06-10 11:52:05.473404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.543 [2024-06-10 11:52:05.473628] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.543 [2024-06-10 11:52:05.473638] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.543 [2024-06-10 11:52:05.473646] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.543 [2024-06-10 11:52:05.477215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.543 [2024-06-10 11:52:05.486259] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.543 [2024-06-10 11:52:05.486955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.543 [2024-06-10 11:52:05.486997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.543 [2024-06-10 11:52:05.487008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.543 [2024-06-10 11:52:05.487246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.543 [2024-06-10 11:52:05.487470] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.543 [2024-06-10 11:52:05.487480] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.543 [2024-06-10 11:52:05.487487] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.543 [2024-06-10 11:52:05.491056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.543 [2024-06-10 11:52:05.500088] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.543 [2024-06-10 11:52:05.500677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.543 [2024-06-10 11:52:05.500719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.543 [2024-06-10 11:52:05.500730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.543 [2024-06-10 11:52:05.500969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.543 [2024-06-10 11:52:05.501192] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.543 [2024-06-10 11:52:05.501202] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.543 [2024-06-10 11:52:05.501209] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.832 [2024-06-10 11:52:05.504775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.832 [2024-06-10 11:52:05.514031] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.832 [2024-06-10 11:52:05.514605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.832 [2024-06-10 11:52:05.514623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.832 [2024-06-10 11:52:05.514631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.832 [2024-06-10 11:52:05.514861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.832 [2024-06-10 11:52:05.515083] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.832 [2024-06-10 11:52:05.515092] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.832 [2024-06-10 11:52:05.515099] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.832 [2024-06-10 11:52:05.518650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.832 [2024-06-10 11:52:05.527884] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.832 [2024-06-10 11:52:05.528449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.832 [2024-06-10 11:52:05.528465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.832 [2024-06-10 11:52:05.528473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.832 [2024-06-10 11:52:05.528697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.832 [2024-06-10 11:52:05.528922] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.832 [2024-06-10 11:52:05.528931] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.832 [2024-06-10 11:52:05.528938] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.832 [2024-06-10 11:52:05.532497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.832 [2024-06-10 11:52:05.541725] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.832 [2024-06-10 11:52:05.542404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.832 [2024-06-10 11:52:05.542442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.832 [2024-06-10 11:52:05.542453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.832 [2024-06-10 11:52:05.542699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.832 [2024-06-10 11:52:05.542923] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.832 [2024-06-10 11:52:05.542933] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.832 [2024-06-10 11:52:05.542941] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.832 [2024-06-10 11:52:05.546504] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.832 [2024-06-10 11:52:05.555531] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.832 [2024-06-10 11:52:05.556192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.832 [2024-06-10 11:52:05.556230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.832 [2024-06-10 11:52:05.556240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.832 [2024-06-10 11:52:05.556479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.832 [2024-06-10 11:52:05.556717] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.832 [2024-06-10 11:52:05.556729] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.832 [2024-06-10 11:52:05.556736] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.832 [2024-06-10 11:52:05.560291] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.832 [2024-06-10 11:52:05.569511] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.832 [2024-06-10 11:52:05.570226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.832 [2024-06-10 11:52:05.570264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.832 [2024-06-10 11:52:05.570274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.832 [2024-06-10 11:52:05.570513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.832 [2024-06-10 11:52:05.570751] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.832 [2024-06-10 11:52:05.570763] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.832 [2024-06-10 11:52:05.570771] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.832 [2024-06-10 11:52:05.574329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.832 [2024-06-10 11:52:05.583347] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.832 [2024-06-10 11:52:05.584017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.832 [2024-06-10 11:52:05.584054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.832 [2024-06-10 11:52:05.584064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.832 [2024-06-10 11:52:05.584303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.832 [2024-06-10 11:52:05.584526] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.584536] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.584544] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.588115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.833 [2024-06-10 11:52:05.597348] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.598020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.598058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.598069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.598307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.598531] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.598541] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.598549] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.602116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.833 [2024-06-10 11:52:05.611354] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.612031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.612070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.612080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.612318] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.612542] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.612552] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.612559] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.616131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.833 [2024-06-10 11:52:05.625356] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.626037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.626075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.626089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.626328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.626552] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.626562] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.626569] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.630136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.833 [2024-06-10 11:52:05.639150] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.639765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.639785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.639793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.640013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.640233] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.640242] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.640249] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.643805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.833 [2024-06-10 11:52:05.653032] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.653636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.653653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.653660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.653888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.654109] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.654118] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.654125] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.657676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.833 [2024-06-10 11:52:05.666891] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.667449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.667465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.667473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.667701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.667922] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.667935] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.667943] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.671491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.833 [2024-06-10 11:52:05.680708] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.681395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.681433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.681444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.681696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.681922] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.681932] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.681939] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.685495] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.833 [2024-06-10 11:52:05.694508] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.695189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.695226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.695237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.695476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.695711] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.695723] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.695731] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.699285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.833 [2024-06-10 11:52:05.708517] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.709207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.709245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.709255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.709494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.709730] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.709742] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.833 [2024-06-10 11:52:05.709749] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.833 [2024-06-10 11:52:05.713304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.833 [2024-06-10 11:52:05.722326] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.833 [2024-06-10 11:52:05.723012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.833 [2024-06-10 11:52:05.723050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.833 [2024-06-10 11:52:05.723060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.833 [2024-06-10 11:52:05.723299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.833 [2024-06-10 11:52:05.723523] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.833 [2024-06-10 11:52:05.723532] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.834 [2024-06-10 11:52:05.723540] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.834 [2024-06-10 11:52:05.727108] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.834 [2024-06-10 11:52:05.736126] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.834 [2024-06-10 11:52:05.736770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.834 [2024-06-10 11:52:05.736808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.834 [2024-06-10 11:52:05.736819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.834 [2024-06-10 11:52:05.737058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.834 [2024-06-10 11:52:05.737282] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.834 [2024-06-10 11:52:05.737291] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.834 [2024-06-10 11:52:05.737299] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.834 [2024-06-10 11:52:05.740869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.834 [2024-06-10 11:52:05.750095] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.834 [2024-06-10 11:52:05.750775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.834 [2024-06-10 11:52:05.750814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.834 [2024-06-10 11:52:05.750824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.834 [2024-06-10 11:52:05.751062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.834 [2024-06-10 11:52:05.751286] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.834 [2024-06-10 11:52:05.751296] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.834 [2024-06-10 11:52:05.751303] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.834 [2024-06-10 11:52:05.754872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.834 [2024-06-10 11:52:05.764104] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.834 [2024-06-10 11:52:05.764688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.834 [2024-06-10 11:52:05.764709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.834 [2024-06-10 11:52:05.764718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.834 [2024-06-10 11:52:05.764944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.834 [2024-06-10 11:52:05.765164] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.834 [2024-06-10 11:52:05.765174] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.834 [2024-06-10 11:52:05.765181] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.834 [2024-06-10 11:52:05.768743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:36.834 [2024-06-10 11:52:05.777968] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.834 [2024-06-10 11:52:05.778532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.834 [2024-06-10 11:52:05.778548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.834 [2024-06-10 11:52:05.778556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.834 [2024-06-10 11:52:05.778784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.834 [2024-06-10 11:52:05.779005] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.834 [2024-06-10 11:52:05.779015] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.834 [2024-06-10 11:52:05.779022] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.834 [2024-06-10 11:52:05.782574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:36.834 [2024-06-10 11:52:05.791794] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:36.834 [2024-06-10 11:52:05.792465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:36.834 [2024-06-10 11:52:05.792503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:36.834 [2024-06-10 11:52:05.792513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:36.834 [2024-06-10 11:52:05.792765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:36.834 [2024-06-10 11:52:05.792991] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:36.834 [2024-06-10 11:52:05.793000] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:36.834 [2024-06-10 11:52:05.793008] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:36.834 [2024-06-10 11:52:05.796560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.096 [2024-06-10 11:52:05.805798] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.096 [2024-06-10 11:52:05.806511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.096 [2024-06-10 11:52:05.806550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.096 [2024-06-10 11:52:05.806560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.096 [2024-06-10 11:52:05.806806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.096 [2024-06-10 11:52:05.807031] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.096 [2024-06-10 11:52:05.807041] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.096 [2024-06-10 11:52:05.807052] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.096 [2024-06-10 11:52:05.810632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.096 [2024-06-10 11:52:05.819656] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.096 [2024-06-10 11:52:05.820355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.096 [2024-06-10 11:52:05.820393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.096 [2024-06-10 11:52:05.820403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.096 [2024-06-10 11:52:05.820642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.096 [2024-06-10 11:52:05.820885] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.096 [2024-06-10 11:52:05.820897] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.096 [2024-06-10 11:52:05.820905] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.096 [2024-06-10 11:52:05.824458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.096 [2024-06-10 11:52:05.833479] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.096 [2024-06-10 11:52:05.834195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.834233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.834244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.834483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.834720] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.834731] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.834739] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.838294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.097 [2024-06-10 11:52:05.847308] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.848016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.848054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.848064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.848303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.848527] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.848537] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.848544] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.852112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.097 [2024-06-10 11:52:05.861127] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.861785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.861828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.861839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.862079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.862303] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.862313] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.862320] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.865884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.097 [2024-06-10 11:52:05.875110] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.875750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.875788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.875799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.876037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.876260] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.876271] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.876278] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.879845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.097 [2024-06-10 11:52:05.889081] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.889716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.889742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.889751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.889975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.890197] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.890206] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.890213] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.893781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.097 [2024-06-10 11:52:05.903015] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.903567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.903584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.903591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.903815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.904040] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.904048] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.904055] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.907615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.097 [2024-06-10 11:52:05.916853] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.917297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.917317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.917324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.917545] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.917773] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.917783] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.917790] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.921350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.097 [2024-06-10 11:52:05.930801] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.931363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.931380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.931387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.931607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.931832] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.931842] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.931849] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.935403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.097 [2024-06-10 11:52:05.944638] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.945227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.945243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.945251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.945470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.945695] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.945704] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.945711] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 [2024-06-10 11:52:05.949280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
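[Editor's note] The same cycle repeats roughly every 14 ms in this log: disconnect, reconnect attempt, ECONNREFUSED, controller marked failed, "Resetting controller failed.", retry. The sketch below only illustrates the shape of that bounded retry-with-delay loop; it is not the bdev_nvme reset path, and the attempt limit and delay are made-up example values.

/* reconnect_loop.c - illustrative bounded reconnect loop (not SPDK code).
 * max_attempts and the delay are arbitrary example values. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    if (!ok)
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return ok;
}

int main(void)
{
    const int max_attempts = 10;                 /* made-up bound for the sketch */

    for (int i = 1; i <= max_attempts; i++) {
        fprintf(stderr, "attempt %d: resetting controller\n", i);
        if (try_connect("10.0.0.2", 4420)) {
            fprintf(stderr, "reconnected\n");
            return 0;
        }
        fprintf(stderr, "attempt %d: controller reinitialization failed\n", i);
        usleep(14000);                           /* ~14 ms between attempts, as in the log */
    }
    fprintf(stderr, "giving up: controller left in failed state\n");
    return 1;
}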
00:43:37.097 [2024-06-10 11:52:05.958520] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.097 [2024-06-10 11:52:05.959147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.097 [2024-06-10 11:52:05.959164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.097 [2024-06-10 11:52:05.959172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.097 [2024-06-10 11:52:05.959391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.097 [2024-06-10 11:52:05.959611] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.097 [2024-06-10 11:52:05.959619] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.097 [2024-06-10 11:52:05.959627] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2512118 Killed "${NVMF_APP[@]}" "$@" 00:43:37.097 11:52:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:43:37.097 [2024-06-10 11:52:05.963192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.097 11:52:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:43:37.097 11:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:37.097 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:37.097 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.097 11:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2513824 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2513824 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:43:37.098 [2024-06-10 11:52:05.972427] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2513824 ']' 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:37.098 [2024-06-10 11:52:05.972819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.098 [2024-06-10 11:52:05.972836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.098 [2024-06-10 11:52:05.972844] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:37.098 [2024-06-10 11:52:05.973063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:43:37.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:37.098 [2024-06-10 11:52:05.973284] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.098 [2024-06-10 11:52:05.973293] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.098 [2024-06-10 11:52:05.973300] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:37.098 11:52:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.098 [2024-06-10 11:52:05.976862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.098 [2024-06-10 11:52:05.986303] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.098 [2024-06-10 11:52:05.986736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.098 [2024-06-10 11:52:05.986756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.098 [2024-06-10 11:52:05.986764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.098 [2024-06-10 11:52:05.986984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.098 [2024-06-10 11:52:05.987204] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.098 [2024-06-10 11:52:05.987214] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.098 [2024-06-10 11:52:05.987222] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.098 [2024-06-10 11:52:05.990801] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.098 [2024-06-10 11:52:06.000250] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.098 [2024-06-10 11:52:06.000831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.098 [2024-06-10 11:52:06.000847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.098 [2024-06-10 11:52:06.000855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.098 [2024-06-10 11:52:06.001074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.098 [2024-06-10 11:52:06.001294] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.098 [2024-06-10 11:52:06.001303] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.098 [2024-06-10 11:52:06.001310] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.098 [2024-06-10 11:52:06.004871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
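[Editor's note] After the old target process is killed, tgt_init restarts nvmf_tgt and waitforlisten blocks with "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." until the new process (pid 2513824) is serving its RPC socket; the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. The C sketch below shows the same idea generically, polling an AF_UNIX socket until connect() succeeds. It is an illustration of the waiting step, not the actual autotest helper.

/* wait_for_socket.c - poll until a UNIX domain socket accepts connections.
 * Socket path and retry count come from the log; the interval is an example value. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;                            /* target is up and listening */
        }
        close(fd);
        usleep(100000);                          /* retry every 100 ms (example value) */
    }
    return -1;
}

int main(void)
{
    printf("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...\n");
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 1;
    }
    printf("socket is ready\n");
    return 0;
}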
00:43:37.098 [2024-06-10 11:52:06.014115] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.098 [2024-06-10 11:52:06.014725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.098 [2024-06-10 11:52:06.014742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.098 [2024-06-10 11:52:06.014750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.098 [2024-06-10 11:52:06.014969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.098 [2024-06-10 11:52:06.015188] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.098 [2024-06-10 11:52:06.015198] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.098 [2024-06-10 11:52:06.015205] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.098 [2024-06-10 11:52:06.018765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.098 [2024-06-10 11:52:06.024294] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:43:37.098 [2024-06-10 11:52:06.024340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:37.098 [2024-06-10 11:52:06.028002] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.098 [2024-06-10 11:52:06.028591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.098 [2024-06-10 11:52:06.028608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.098 [2024-06-10 11:52:06.028616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.098 [2024-06-10 11:52:06.028840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.098 [2024-06-10 11:52:06.029061] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.098 [2024-06-10 11:52:06.029071] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.098 [2024-06-10 11:52:06.029078] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.098 [2024-06-10 11:52:06.032634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.098 [2024-06-10 11:52:06.041876] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.098 [2024-06-10 11:52:06.042493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.098 [2024-06-10 11:52:06.042509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.098 [2024-06-10 11:52:06.042516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.098 [2024-06-10 11:52:06.042747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.098 [2024-06-10 11:52:06.042968] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.098 [2024-06-10 11:52:06.042977] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.098 [2024-06-10 11:52:06.042984] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.098 [2024-06-10 11:52:06.046529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.098 EAL: No free 2048 kB hugepages reported on node 1 00:43:37.098 [2024-06-10 11:52:06.055774] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.098 [2024-06-10 11:52:06.056384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.098 [2024-06-10 11:52:06.056400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.098 [2024-06-10 11:52:06.056408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.098 [2024-06-10 11:52:06.056627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.098 [2024-06-10 11:52:06.056852] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.098 [2024-06-10 11:52:06.056861] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.098 [2024-06-10 11:52:06.056868] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.098 [2024-06-10 11:52:06.060427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
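[Editor's note] The "EAL: No free 2048 kB hugepages reported on node 1" notice above means DPDK found no unused 2 MB hugepages on NUMA node 1 while the restarted target was initializing. Assuming the standard Linux sysfs layout, the per-node counter that notice refers to can be read as in the sketch below; the node number and page size are the ones from the log and may differ on other systems.

/* hugepages_check.c - read the free 2 MB hugepage count for NUMA node 1.
 * Assumes the standard sysfs layout; adjust node/page size for other systems. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) != 1) {
        fclose(f);
        fprintf(stderr, "could not parse %s\n", path);
        return 1;
    }
    fclose(f);

    printf("node 1: %ld free 2048 kB hugepages\n", free_pages);
    return 0;
}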
00:43:37.362 [2024-06-10 11:52:06.069745] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.362 [2024-06-10 11:52:06.070202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.362 [2024-06-10 11:52:06.070221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.362 [2024-06-10 11:52:06.070229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.362 [2024-06-10 11:52:06.070453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.362 [2024-06-10 11:52:06.070684] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.362 [2024-06-10 11:52:06.070697] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.362 [2024-06-10 11:52:06.070704] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.362 [2024-06-10 11:52:06.074259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.362 [2024-06-10 11:52:06.083749] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.362 [2024-06-10 11:52:06.084186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.362 [2024-06-10 11:52:06.084203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.362 [2024-06-10 11:52:06.084211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.362 [2024-06-10 11:52:06.084430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.362 [2024-06-10 11:52:06.084650] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.362 [2024-06-10 11:52:06.084659] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.362 [2024-06-10 11:52:06.084666] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.362 [2024-06-10 11:52:06.088230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.362 [2024-06-10 11:52:06.088432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:37.362 [2024-06-10 11:52:06.097687] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.362 [2024-06-10 11:52:06.098279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.362 [2024-06-10 11:52:06.098296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.362 [2024-06-10 11:52:06.098304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.362 [2024-06-10 11:52:06.098524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.362 [2024-06-10 11:52:06.098757] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.362 [2024-06-10 11:52:06.098768] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.362 [2024-06-10 11:52:06.098775] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.362 [2024-06-10 11:52:06.102324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.362 [2024-06-10 11:52:06.111577] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.362 [2024-06-10 11:52:06.112194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.362 [2024-06-10 11:52:06.112211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.362 [2024-06-10 11:52:06.112218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.362 [2024-06-10 11:52:06.112438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.362 [2024-06-10 11:52:06.112658] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.362 [2024-06-10 11:52:06.112675] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.362 [2024-06-10 11:52:06.112683] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.362 [2024-06-10 11:52:06.116243] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.362 [2024-06-10 11:52:06.125478] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.362 [2024-06-10 11:52:06.126071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.362 [2024-06-10 11:52:06.126088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.362 [2024-06-10 11:52:06.126096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.362 [2024-06-10 11:52:06.126316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.362 [2024-06-10 11:52:06.126536] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.362 [2024-06-10 11:52:06.126545] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.362 [2024-06-10 11:52:06.126552] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.362 [2024-06-10 11:52:06.130114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.362 [2024-06-10 11:52:06.139446] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.362 [2024-06-10 11:52:06.140060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.362 [2024-06-10 11:52:06.140076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.362 [2024-06-10 11:52:06.140085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.362 [2024-06-10 11:52:06.140305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.362 [2024-06-10 11:52:06.140524] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.362 [2024-06-10 11:52:06.140534] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.362 [2024-06-10 11:52:06.140541] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.362 [2024-06-10 11:52:06.144105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.362 [2024-06-10 11:52:06.152362] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:37.362 [2024-06-10 11:52:06.152393] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:37.362 [2024-06-10 11:52:06.152401] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:37.362 [2024-06-10 11:52:06.152407] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:37.362 [2024-06-10 11:52:06.152412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
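The app_setup_trace notices above give two ways to pull the tracepoint data for this nvmf instance. A minimal capture from the same box, assuming the spdk_trace tool is the one built under build/bin in this workspace, would be:

  # decode the live trace ring for shm id 0, and keep the raw ring for offline analysis
  ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  cp /dev/shm/nvmf_trace.0 /tmp/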
00:43:37.362 [2024-06-10 11:52:06.152513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:43:37.362 [2024-06-10 11:52:06.152639] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:37.362 [2024-06-10 11:52:06.152640] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:43:37.362 [2024-06-10 11:52:06.153336] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.362 [2024-06-10 11:52:06.153892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.362 [2024-06-10 11:52:06.153910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.153922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.154143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.154364] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.154374] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.154381] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.157943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.363 [2024-06-10 11:52:06.167182] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.167667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.167690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.167698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.167918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.168138] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.168147] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.168154] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.171713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
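The three reactors above match the core mask passed in the EAL parameters earlier (-c 0xE): 0xE is binary 1110, so cores 1, 2 and 3 carry reactors and core 0 is left alone. A one-line check of that arithmetic, purely illustrative:

  printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xE, i.e. cores 1-3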
00:43:37.363 [2024-06-10 11:52:06.181154] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.181782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.181800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.181808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.182027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.182247] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.182256] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.182263] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.185824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.363 [2024-06-10 11:52:06.195287] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.195759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.195776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.195784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.196004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.196224] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.196238] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.196245] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.199805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.363 [2024-06-10 11:52:06.209254] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.209842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.209859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.209867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.210087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.210307] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.210315] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.210322] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.213883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.363 [2024-06-10 11:52:06.223117] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.223565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.223583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.223591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.223817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.224038] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.224048] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.224055] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.227614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.363 [2024-06-10 11:52:06.237057] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.237630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.237646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.237653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.237877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.238097] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.238106] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.238113] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.363 [2024-06-10 11:52:06.241673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.363 [2024-06-10 11:52:06.250911] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.251347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.251363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.251371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.251590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.251815] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.251826] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.251834] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.255393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.363 [2024-06-10 11:52:06.264835] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.265286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.265303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.265310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.265529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.265754] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.265764] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.265771] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 [2024-06-10 11:52:06.269362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.363 [2024-06-10 11:52:06.278812] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.363 [2024-06-10 11:52:06.279424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.363 [2024-06-10 11:52:06.279441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.363 [2024-06-10 11:52:06.279450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.363 [2024-06-10 11:52:06.279675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.363 [2024-06-10 11:52:06.279896] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.363 [2024-06-10 11:52:06.279905] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.363 [2024-06-10 11:52:06.279912] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:37.363 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.364 [2024-06-10 11:52:06.283464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.364 [2024-06-10 11:52:06.286531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:37.364 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:37.364 11:52:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:37.364 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:37.364 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.364 [2024-06-10 11:52:06.292696] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.364 [2024-06-10 11:52:06.293307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.364 [2024-06-10 11:52:06.293322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.364 [2024-06-10 11:52:06.293330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.364 [2024-06-10 11:52:06.293549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.364 [2024-06-10 11:52:06.293774] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.364 [2024-06-10 11:52:06.293784] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.364 [2024-06-10 11:52:06.293791] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.364 [2024-06-10 11:52:06.297348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.364 [2024-06-10 11:52:06.306586] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.364 [2024-06-10 11:52:06.307164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.364 [2024-06-10 11:52:06.307181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.364 [2024-06-10 11:52:06.307189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.364 [2024-06-10 11:52:06.307408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.364 [2024-06-10 11:52:06.307628] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.364 [2024-06-10 11:52:06.307638] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.364 [2024-06-10 11:52:06.307645] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.364 [2024-06-10 11:52:06.311216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.364 [2024-06-10 11:52:06.320447] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.364 [2024-06-10 11:52:06.320891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.364 [2024-06-10 11:52:06.320907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.364 [2024-06-10 11:52:06.320915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.364 [2024-06-10 11:52:06.321134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.364 [2024-06-10 11:52:06.321354] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.364 [2024-06-10 11:52:06.321362] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.364 [2024-06-10 11:52:06.321373] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.364 [2024-06-10 11:52:06.324934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.625 Malloc0 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.625 [2024-06-10 11:52:06.334371] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.625 [2024-06-10 11:52:06.334841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.625 [2024-06-10 11:52:06.334858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.625 [2024-06-10 11:52:06.334865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.625 [2024-06-10 11:52:06.335085] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.625 [2024-06-10 11:52:06.335306] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.625 [2024-06-10 11:52:06.335314] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.625 [2024-06-10 11:52:06.335321] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.625 [2024-06-10 11:52:06.338885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.625 [2024-06-10 11:52:06.348327] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.625 [2024-06-10 11:52:06.348899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.625 [2024-06-10 11:52:06.348916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.625 [2024-06-10 11:52:06.348924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.625 [2024-06-10 11:52:06.349143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.625 [2024-06-10 11:52:06.349363] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.625 [2024-06-10 11:52:06.349372] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.625 [2024-06-10 11:52:06.349379] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:43:37.625 [2024-06-10 11:52:06.352938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:37.625 [2024-06-10 11:52:06.362169] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.625 [2024-06-10 11:52:06.362774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:37.625 [2024-06-10 11:52:06.362794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x87b840 with addr=10.0.0.2, port=4420 00:43:37.625 [2024-06-10 11:52:06.362802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87b840 is same with the state(5) to be set 00:43:37.625 [2024-06-10 11:52:06.363005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:37.625 [2024-06-10 11:52:06.363021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b840 (9): Bad file descriptor 00:43:37.625 [2024-06-10 11:52:06.363242] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:43:37.625 [2024-06-10 11:52:06.363252] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:43:37.625 [2024-06-10 11:52:06.363258] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
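Interleaved with the reconnect noise, bdevperf.sh has now built the target side over RPC: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420 (confirmed by the "Target Listening" notice above). rpc_cmd in the test framework forwards to scripts/rpc.py, so the same sequence as plain calls, assuming the default /var/tmp/spdk.sock RPC socket, is:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the reset loop below finally completes ("Resetting controller successful") and bdevperf runs its verify workload, summarized in the latency table that follows.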
00:43:37.625 [2024-06-10 11:52:06.366815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:37.625 11:52:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2512787 00:43:37.625 [2024-06-10 11:52:06.376043] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:37.625 [2024-06-10 11:52:06.539122] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:43:47.624 00:43:47.624 Latency(us) 00:43:47.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:47.624 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:47.624 Verification LBA range: start 0x0 length 0x4000 00:43:47.624 Nvme1n1 : 15.02 6989.03 27.30 8701.88 0.00 8131.75 785.07 19988.48 00:43:47.624 =================================================================================================================== 00:43:47.624 Total : 6989.03 27.30 8701.88 0.00 8131.75 785.07 19988.48 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:47.624 rmmod nvme_tcp 00:43:47.624 rmmod nvme_fabrics 00:43:47.624 rmmod nvme_keyring 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2513824 ']' 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2513824 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 2513824 ']' 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 2513824 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2513824 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2513824' 00:43:47.624 killing process with pid 2513824 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 2513824 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 2513824 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:47.624 11:52:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:49.041 11:52:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:49.041 00:43:49.041 real 0m27.982s 00:43:49.041 user 1m3.896s 00:43:49.041 sys 0m7.003s 00:43:49.041 11:52:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:49.041 11:52:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:49.041 ************************************ 00:43:49.041 END TEST nvmf_bdevperf 00:43:49.041 ************************************ 00:43:49.041 11:52:17 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:43:49.041 11:52:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:43:49.041 11:52:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:49.041 11:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:49.041 ************************************ 00:43:49.041 START TEST nvmf_target_disconnect 00:43:49.041 ************************************ 00:43:49.041 11:52:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:43:49.303 * Looking for test storage... 
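Before the target_disconnect suite starts, nvmftestfini above tears the bdevperf setup down again. Condensed into plain commands (pid 2513824 being the nvmf_tgt that served this run, and with the framework's error handling dropped), the cleanup amounts to roughly:

  sync
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring   # unload the host-side kernel modules
  kill 2513824 && wait 2513824                     # stop the nvmf_tgt application
  ip -4 addr flush cvl_0_1                         # drop the test address from the initiator port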
00:43:49.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:43:49.303 11:52:18 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:57.453 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:57.453 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:57.453 11:52:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:57.453 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:57.453 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:57.453 11:52:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:57.453 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:57.453 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:43:57.453 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:57.453 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:57.453 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:57.453 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:57.453 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:57.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:57.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:43:57.454 00:43:57.454 --- 10.0.0.2 ping statistics --- 00:43:57.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.454 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:57.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:57.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:43:57.454 00:43:57.454 --- 10.0.0.1 ping statistics --- 00:43:57.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:57.454 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:43:57.454 ************************************ 00:43:57.454 START TEST nvmf_target_disconnect_tc1 00:43:57.454 ************************************ 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:43:57.454 
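nvmf_tcp_init above has carved the two ice ports into a point-to-point test link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, and both ping checks pass. Replayed as a standalone script, using the device and namespace names discovered above, the setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1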
11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:57.454 EAL: No free 2048 kB hugepages reported on node 1 00:43:57.454 [2024-06-10 11:52:25.373897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:57.454 [2024-06-10 11:52:25.373979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16751d0 with addr=10.0.0.2, port=4420 00:43:57.454 [2024-06-10 11:52:25.374009] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:43:57.454 [2024-06-10 11:52:25.374020] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:57.454 [2024-06-10 11:52:25.374028] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:43:57.454 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:43:57.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:43:57.454 Initializing NVMe Controllers 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:57.454 00:43:57.454 real 0m0.114s 00:43:57.454 user 0m0.058s 00:43:57.454 sys 
0m0.055s 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:43:57.454 ************************************ 00:43:57.454 END TEST nvmf_target_disconnect_tc1 00:43:57.454 ************************************ 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:43:57.454 ************************************ 00:43:57.454 START TEST nvmf_target_disconnect_tc2 00:43:57.454 ************************************ 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2519851 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2519851 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2519851 ']' 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:57.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:57.454 11:52:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.454 [2024-06-10 11:52:25.530334] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
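The tc1 case above passes precisely because the connection attempt fails: no nvmf target is listening on 10.0.0.2:4420 yet, so the reconnect example's spdk_nvme_probe() gets connect() errno 111 (ECONNREFUSED), the wrapped command exits non-zero (es=1), and the NOT helper from autotest_common.sh inverts that status. A simplified sketch of the inversion idiom (the real helper, as the es checks in the trace show, also distinguishes es > 128, i.e. death by signal):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # crashed/was signalled: propagate as a real failure
      (( es != 0 ))                    # succeed only if the wrapped command failed
  }
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

tc2, which starts next, repeats the exercise with a real target in place and then takes it away mid-workload.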
00:43:57.454 [2024-06-10 11:52:25.530395] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:57.454 EAL: No free 2048 kB hugepages reported on node 1 00:43:57.454 [2024-06-10 11:52:25.618415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:57.454 [2024-06-10 11:52:25.713920] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:57.454 [2024-06-10 11:52:25.713979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:57.454 [2024-06-10 11:52:25.713988] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:57.454 [2024-06-10 11:52:25.713995] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:57.454 [2024-06-10 11:52:25.714001] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:57.454 [2024-06-10 11:52:25.714204] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:43:57.454 [2024-06-10 11:52:25.714347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:43:57.454 [2024-06-10 11:52:25.714485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:43:57.454 [2024-06-10 11:52:25.714486] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.717 Malloc0 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.717 [2024-06-10 11:52:26.506256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.717 [2024-06-10 11:52:26.546616] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2520192 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:43:57.717 11:52:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:57.717 EAL: No free 2048 kB hugepages reported on node 1 00:43:59.643 11:52:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2519851 00:43:59.643 11:52:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 
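rpc_cmd in this trace is the autotest wrapper that drives scripts/rpc.py against the nvmf_tgt just started inside the namespace. Spelled out, the target-side setup for tc2 is roughly (paths relative to the SPDK tree, target started with the same -i 0 -e 0xFFFF -m 0xF0 arguments shown above):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the reconnect example is launched in the background (reconnectpid=2520192) against 10.0.0.2:4420 with a 10-second random read/write workload (-t 10, -q 32, -o 4096, -M 50).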
00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Read completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Write completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Write completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Write completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Write completed with error (sct=0, sc=8) 00:43:59.643 starting I/O failed 00:43:59.643 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 [2024-06-10 11:52:28.579127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 
starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Write completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 Read completed with error (sct=0, sc=8) 00:43:59.644 starting I/O failed 00:43:59.644 [2024-06-10 11:52:28.579315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:43:59.644 [2024-06-10 11:52:28.579684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.579698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.580038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.580065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.580431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.580442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 
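Everything that follows is the disconnect being exercised, not an infrastructure problem: two seconds into the workload the test hard-kills the target (kill -9 2519851), so the queued I/Os on each qpair are completed back with errors and the host reports 'CQ transport error -6 (No such device or address)', after which every reconnection attempt fails with connect() errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. Paraphrased from the host/target_disconnect.sh lines in the trace, tc2 is doing roughly:

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # drop the target out from under the running workload
  sleep 2              # leave the initiator retrying against a dead listener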
00:43:59.644 [2024-06-10 11:52:28.580907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.580937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.581317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.581327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.581558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.581566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.582042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.582071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.582401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.582411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.582877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.582906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.583238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.583252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.583494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.583503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.583828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.583837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.584080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.584088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 
00:43:59.644 [2024-06-10 11:52:28.584250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.584258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.584522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.584530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.584753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.584764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.585018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.585026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.585321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.585330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.585570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.585579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.585954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.585963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.586336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.586346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.644 qpair failed and we were unable to recover it. 00:43:59.644 [2024-06-10 11:52:28.586732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.644 [2024-06-10 11:52:28.586741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.587136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.587145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 
00:43:59.645 [2024-06-10 11:52:28.587509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.587517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.587763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.587772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.588161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.588170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.588346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.588355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.588689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.588698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.588982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.588990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.589243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.589252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.589590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.589599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.590000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.590009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.590378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.590386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 
00:43:59.645 [2024-06-10 11:52:28.590599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.590607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.590856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.590865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.591183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.591191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.591348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.591358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.591718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.591727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.591958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.591967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.592349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.592358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.592727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.592736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.592921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.592930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.593270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.593279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 
00:43:59.645 [2024-06-10 11:52:28.593649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.593657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.594051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.594060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.594371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.594379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.594734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.594742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.595110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.595119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.595478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.595486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.595838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.595848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.596205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.596214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.596543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.596551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.596927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.596935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 
00:43:59.645 [2024-06-10 11:52:28.597281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.597289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.597643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.597651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.598021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.598029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.598410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.598418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.598811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.598820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.599173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.599182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.645 [2024-06-10 11:52:28.599500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.645 [2024-06-10 11:52:28.599508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.645 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.599888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.599896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.600230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.600239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.600604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.600612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 
00:43:59.646 [2024-06-10 11:52:28.600965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.600974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.601339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.601347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.601585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.601593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.601891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.601900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.602229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.602236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.602538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.602546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.602712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.602720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.603064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.603072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.603440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.603448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.603770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.603779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 
00:43:59.646 [2024-06-10 11:52:28.604158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.604165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.604499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.604508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.604859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.604867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.605248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.605257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.605591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.605600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.605935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.605944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.606309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.606318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.606507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.606516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.606851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.606860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.607197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.607206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 
00:43:59.646 [2024-06-10 11:52:28.607572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.607581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.607928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.607936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.608306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.608315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.608695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.608705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.609076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.609085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.609451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.609460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.609800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.609811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.610168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.610177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.610551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.610560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 00:43:59.646 [2024-06-10 11:52:28.610751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.646 [2024-06-10 11:52:28.610761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.646 qpair failed and we were unable to recover it. 
00:43:59.646 [2024-06-10 11:52:28.611118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:43:59.646 [2024-06-10 11:52:28.611127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:43:59.646 qpair failed and we were unable to recover it.
00:43:59.646 - 00:43:59.927 [2024-06-10 11:52:28.611 - 11:52:28.684] the same three messages repeat for every reconnect attempt in this interval: connect() fails with errno = 111 and the qpair (tqpair=0x7f6224000b90, addr=10.0.0.2, port=4420) cannot be recovered.
00:43:59.927 [2024-06-10 11:52:28.684176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:43:59.927 [2024-06-10 11:52:28.684184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:43:59.927 qpair failed and we were unable to recover it.
00:43:59.927 [2024-06-10 11:52:28.684530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.684540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.684908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.684916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.685232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.685239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.685426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.685434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.685775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.685783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.686197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.686206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.686510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.686518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.686868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.686876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.927 [2024-06-10 11:52:28.687242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.927 [2024-06-10 11:52:28.687250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.927 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.687592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.687601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 
00:43:59.928 [2024-06-10 11:52:28.687943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.687951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.688317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.688325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.688673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.688682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.689046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.689054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.689424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.689433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.689865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.689893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.690224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.690234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.690605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.690614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.690977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.690986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.691338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.691346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 
00:43:59.928 [2024-06-10 11:52:28.691753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.691761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.692105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.692113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.692479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.692488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.692828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.692837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.693188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.693197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.693569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.693577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.693932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.693944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.694290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.694299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.694630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.694638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.694889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.694897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 
00:43:59.928 [2024-06-10 11:52:28.695178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.695186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.695554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.695562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.695895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.695903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.696252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.696261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.696627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.696635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.696988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.696997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.697417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.697425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.697751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.697760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.698020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.698028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.698373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.698382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 
00:43:59.928 [2024-06-10 11:52:28.698753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.698762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.699143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.699151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.699517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.699525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.699759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.699767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.700134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.700142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.700484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.700492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.700859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.928 [2024-06-10 11:52:28.700868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.928 qpair failed and we were unable to recover it. 00:43:59.928 [2024-06-10 11:52:28.701242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.701250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.701593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.701601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.701971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.701979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 
00:43:59.929 [2024-06-10 11:52:28.702349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.702357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.702725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.702734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.703024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.703032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.703410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.703418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.703764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.703773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.704139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.704147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.704521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.704529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.704881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.704891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.705204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.705212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.705553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.705562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 
00:43:59.929 [2024-06-10 11:52:28.705924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.705932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.706297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.706305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.706678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.706687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.707057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.707066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.707304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.707311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.707639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.707648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.708074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.708084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.708420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.708430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.708792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.708801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.709210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.709218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 
00:43:59.929 [2024-06-10 11:52:28.709542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.709550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.709745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.709754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.710167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.710176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.710508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.710515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.710891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.710899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.711236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.711243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.711626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.711634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.711884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.711892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.712246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.712254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.712618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.712626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 
00:43:59.929 [2024-06-10 11:52:28.712978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.712987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.713404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.713412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.713751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.713759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.714090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.929 [2024-06-10 11:52:28.714098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.929 qpair failed and we were unable to recover it. 00:43:59.929 [2024-06-10 11:52:28.714446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.714455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.714826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.714834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.715025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.715033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.715347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.715355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.715721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.715729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.716137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.716144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 
00:43:59.930 [2024-06-10 11:52:28.716482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.716490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.716864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.716872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.717241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.717251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.717680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.717688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.718008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.718017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.718386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.718394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.718761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.718770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.719116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.719124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.719491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.719499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.719870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.719879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 
00:43:59.930 [2024-06-10 11:52:28.720243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.720251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.720622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.720630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.720992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.721001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.721365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.721373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.721741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.721750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.722074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.722081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.722435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.722442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.722812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.722820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.723171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.723181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.723550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.723558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 
00:43:59.930 [2024-06-10 11:52:28.723900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.723909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.724260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.724268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.724455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.724463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.724800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.724808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.725191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.725199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.725530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.725538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.725911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.725920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.726155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.726163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.726513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.726521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.726858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.726867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 
00:43:59.930 [2024-06-10 11:52:28.727216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.727224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.727414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.727422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.930 [2024-06-10 11:52:28.727761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.930 [2024-06-10 11:52:28.727770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.930 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.727977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.727985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.728312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.728320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.728690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.728698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.729061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.729069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.729434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.729442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.729808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.729816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.730164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.730172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 
00:43:59.931 [2024-06-10 11:52:28.730539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.730547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.730805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.730813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.731147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.731155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.731345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.731355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.731572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.731580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.731926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.731935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.732301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.732310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.732674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.732683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.733029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.733038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.733404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.733412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 
00:43:59.931 [2024-06-10 11:52:28.733797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.733805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.734181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.734190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.734376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.734384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.734718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.734734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.735054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.735062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.735426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.735434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.735804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.735813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.736167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.736175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.736573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.736582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.736912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.736920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 
00:43:59.931 [2024-06-10 11:52:28.737264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.737273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.737641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.737650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.738014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.738023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.738366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.738375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.738739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.931 [2024-06-10 11:52:28.738748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.931 qpair failed and we were unable to recover it. 00:43:59.931 [2024-06-10 11:52:28.739119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.739127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.739493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.739501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.739871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.739880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.740265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.740273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.740616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.740624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 
00:43:59.932 [2024-06-10 11:52:28.740969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.740977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.741350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.741359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.741702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.741711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.742079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.742087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.742453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.742461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.742809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.742818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.743191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.743199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.743564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.743572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.743901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.743909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.744278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.744286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 
00:43:59.932 [2024-06-10 11:52:28.744662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.744673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.744992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.745001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.745375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.745383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.745751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.745762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.746100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.746108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.746476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.746484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.746861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.746869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.747214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.747222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.747588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.747595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.747967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.747976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 
00:43:59.932 [2024-06-10 11:52:28.748342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.748351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.748714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.748723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.749056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.749065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.749407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.749416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.749781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.749789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.750125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.750133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.750480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.750488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.750862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.750870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.751270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.751279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.751504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.751511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 
00:43:59.932 [2024-06-10 11:52:28.751881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.751890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.752221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.752229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.752576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.752584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.932 qpair failed and we were unable to recover it. 00:43:59.932 [2024-06-10 11:52:28.752810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.932 [2024-06-10 11:52:28.752817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.753157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.753165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.753444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.753451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.753760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.753768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.754116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.754125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.754470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.754478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.754844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.754853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 
00:43:59.933 [2024-06-10 11:52:28.755088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.755097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.755514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.755521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.755855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.755863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.756227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.756235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.756583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.756592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.756933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.756943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.757313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.757322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.757630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.757639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.757963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.757971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.758371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.758380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 
00:43:59.933 [2024-06-10 11:52:28.758742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.758750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.759081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.759089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.759453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.759461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.759829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.759840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.760201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.760209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.760576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.760584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.761005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.761014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.761339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.761348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.761713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.761721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.762065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.762074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 
00:43:59.933 [2024-06-10 11:52:28.762404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.762412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.762603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.762612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.762959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.762967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.763166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.763174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.763546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.763554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.763942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.763950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.764307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.764316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.764573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.764581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.764912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.764921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.765297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.765305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 
00:43:59.933 [2024-06-10 11:52:28.765639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.765648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.933 [2024-06-10 11:52:28.765815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.933 [2024-06-10 11:52:28.765824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.933 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.766211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.766220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.766540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.766549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.766964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.766972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.767337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.767345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.767717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.767725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.767914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.767923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.768266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.768274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.768601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.768609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 
00:43:59.934 [2024-06-10 11:52:28.768960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.768969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.769339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.769347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.769711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.769719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.770083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.770091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.770460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.770468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.770843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.770852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.771196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.771204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.771567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.771575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.771936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.771945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.772287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.772295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 
00:43:59.934 [2024-06-10 11:52:28.772482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.772490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.772819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.772828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.773187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.773195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.773574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.773585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.773917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.773926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.774314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.774323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.774655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.774663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.775026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.775035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.775381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.775389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.775728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.775736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 
00:43:59.934 [2024-06-10 11:52:28.775930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.775938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.776303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.776312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.776681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.776690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.777059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.777067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.777412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.777420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.777785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.777793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.778170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.778178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.778524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.778532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.778912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.778921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.934 [2024-06-10 11:52:28.779285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.779293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 
00:43:59.934 [2024-06-10 11:52:28.779603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.934 [2024-06-10 11:52:28.779610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.934 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.779929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.779938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.780259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.780268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.780611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.780619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.780959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.780968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.781337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.781345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.781690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.781698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.782063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.782070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.782441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.782450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.782660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.782672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 
00:43:59.935 [2024-06-10 11:52:28.782906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.782914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.783254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.783263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.783607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.783615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.783937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.783946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.784311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.784319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.784664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.784675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.785009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.785017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.785381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.785389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.785697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.785706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.786077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.786085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 
00:43:59.935 [2024-06-10 11:52:28.786455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.786463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.786775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.786783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.787147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.787155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.787523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.787533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.787884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.787893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.788262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.788270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.788581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.788589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.788922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.788929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.789262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.789270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 00:43:59.935 [2024-06-10 11:52:28.789631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.789639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it. 
00:43:59.935 [2024-06-10 11:52:28.789977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.935 [2024-06-10 11:52:28.789985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.935 qpair failed and we were unable to recover it.
00:43:59.935-00:43:59.941 (the same two *ERROR* messages from posix.c:1037:posix_sock_create and nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock, each followed by "qpair failed and we were unable to recover it.", repeated for every intervening connection attempt against tqpair=0x7f6224000b90, addr=10.0.0.2, port=4420; only the per-attempt timestamps between 11:52:28.790 and 11:52:28.861 differ)
00:43:59.941 [2024-06-10 11:52:28.861896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.861903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it.
00:43:59.941 [2024-06-10 11:52:28.862359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.862367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.862704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.862712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.863061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.863069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.863332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.863340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.863531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.863540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.863844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.863852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.864201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.864210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.864563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.864571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.864922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.864931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.865172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.865180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 
00:43:59.941 [2024-06-10 11:52:28.865525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.865534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.865787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.865795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.866158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.866166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.866377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.866385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.941 qpair failed and we were unable to recover it. 00:43:59.941 [2024-06-10 11:52:28.866719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.941 [2024-06-10 11:52:28.866728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.866945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.866952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.867178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.867185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.867519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.867527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.867877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.867885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.868298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.868306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 
00:43:59.942 [2024-06-10 11:52:28.868537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.868545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.868924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.868932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.869299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.869307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.869629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.869638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.869805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.869814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.870178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.870186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.870401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.870409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.870768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.870776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.871134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.871142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.871394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.871401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 
00:43:59.942 [2024-06-10 11:52:28.871839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.871847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.872176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.872185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.872568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.872575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.872795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.872802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.873167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.873175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.873519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.873528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.873880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.873888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.874268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.874275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.874691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.874699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.875049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.875057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 
00:43:59.942 [2024-06-10 11:52:28.875430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.875438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.875810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.875819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.876204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.876212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.876557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.876565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.876930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.876938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.877312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.877320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.877666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.877678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.877896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.877904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.878258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.878266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.878607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.878614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 
00:43:59.942 [2024-06-10 11:52:28.878964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.878972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.879308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.879315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.942 qpair failed and we were unable to recover it. 00:43:59.942 [2024-06-10 11:52:28.879694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:43:59.942 [2024-06-10 11:52:28.879702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:43:59.943 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.880034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.880043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.880340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.880348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.880525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.880535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.880828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.880836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.881194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.881203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.881556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.881565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.881928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.881936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 
00:44:00.217 [2024-06-10 11:52:28.882160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.882167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.882409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.882418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.882760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.882768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.882989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.882998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.883347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.883356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.883553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.883562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.883907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.883916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.884151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.884158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.884530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.884539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.884923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.884931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 
00:44:00.217 [2024-06-10 11:52:28.885242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.885251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.885643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.885652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.885914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.885923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.886274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.886283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.886633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.886642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.886977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.886986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.887320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.887329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.887674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.887683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.887914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.887922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.888155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.888163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 
00:44:00.217 [2024-06-10 11:52:28.888481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.888489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.888840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.888848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.889195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.889204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.889456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.889464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.889653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.217 [2024-06-10 11:52:28.889661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.217 qpair failed and we were unable to recover it. 00:44:00.217 [2024-06-10 11:52:28.890007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.890016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.890334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.890341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.890566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.890574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.890902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.890910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.891287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.891295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 
00:44:00.218 [2024-06-10 11:52:28.891712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.891720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.892069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.892077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.892290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.892298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.892638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.892647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.892994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.893002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.893315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.893323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.893658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.893666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.894019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.894027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.894384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.894392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.894742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.894750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 
00:44:00.218 [2024-06-10 11:52:28.895106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.895115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.895326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.895334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.895679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.895687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.895929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.895938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.896283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.896291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.896661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.896672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.897041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.897049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.897397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.897405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.897784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.897792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.898162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.898170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 
00:44:00.218 [2024-06-10 11:52:28.898518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.898526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.898877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.898885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.899271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.899279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.899662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.899673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.900031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.900039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.900259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.900266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.900646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.900655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.901008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.901017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.901258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.901266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 00:44:00.218 [2024-06-10 11:52:28.901626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.901634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.218 qpair failed and we were unable to recover it. 
00:44:00.218 [2024-06-10 11:52:28.901980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.218 [2024-06-10 11:52:28.901988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.902330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.902339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.902682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.902691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.902865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.902873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.903234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.903242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.903575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.903583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.903902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.903910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.904298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.904306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.904535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.904542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 00:44:00.219 [2024-06-10 11:52:28.904851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.219 [2024-06-10 11:52:28.904859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.219 qpair failed and we were unable to recover it. 
00:44:00.219 [2024-06-10 11:52:28.905209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:00.219 [2024-06-10 11:52:28.905220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:00.219 qpair failed and we were unable to recover it.
00:44:00.225 [the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 11:52:28.905 through 11:52:28.978]
00:44:00.225 [2024-06-10 11:52:28.979079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.979088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.979459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.979467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.979698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.979705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.980025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.980035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.980389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.980398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.980772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.980782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.981141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.981150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.981528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.981537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.981903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.981912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.982275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.982285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 
00:44:00.225 [2024-06-10 11:52:28.982633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.982641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.983012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.983021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.983383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.983391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.983765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.983773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.984131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.984139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.984513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.984520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.984883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.984891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.985255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.985263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.985606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.985614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.985939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.985947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 
00:44:00.225 [2024-06-10 11:52:28.986312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.986320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.986569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.225 [2024-06-10 11:52:28.986578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.225 qpair failed and we were unable to recover it. 00:44:00.225 [2024-06-10 11:52:28.986910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.986918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.987271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.987280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.987648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.987657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.988027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.988035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.988380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.988389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.988582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.988594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.988924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.988933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.989240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.989249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 
00:44:00.226 [2024-06-10 11:52:28.989610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.989621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.989960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.989969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.990343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.990352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.990724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.990733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.990974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.990982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.991317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.991325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.991696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.991705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.992077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.992085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.992273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.992281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.992607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.992615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 
00:44:00.226 [2024-06-10 11:52:28.992960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.992969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.993338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.993346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.993730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.993738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.994040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.994049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.994402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.994411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.994776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.994784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.995137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.995145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.995509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.995517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.995822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.995831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.996170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.996178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 
00:44:00.226 [2024-06-10 11:52:28.996524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.996532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.996902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.996911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.997226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.997235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.226 qpair failed and we were unable to recover it. 00:44:00.226 [2024-06-10 11:52:28.997608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.226 [2024-06-10 11:52:28.997617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:28.997963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:28.997971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:28.998312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:28.998321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:28.998688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:28.998697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:28.999033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:28.999042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:28.999420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:28.999428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:28.999794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:28.999802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 
00:44:00.227 [2024-06-10 11:52:29.000144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.000153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.000523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.000531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.000879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.000887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.001211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.001219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.001587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.001595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.001933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.001942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.002287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.002295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.002666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.002678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.003023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.003031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.003405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.003413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 
00:44:00.227 [2024-06-10 11:52:29.003761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.003771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.004183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.004191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.004527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.004536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.004918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.004926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.005269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.005277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.005470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.005479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.005822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.005830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.006187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.006195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.006618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.006626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.006975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.006984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 
00:44:00.227 [2024-06-10 11:52:29.007287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.007295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.007676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.007684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.008048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.008056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.008347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.008356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.008700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.008708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.009055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.009064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.009407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.009416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.009813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.227 [2024-06-10 11:52:29.009821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.227 qpair failed and we were unable to recover it. 00:44:00.227 [2024-06-10 11:52:29.010159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.010167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.010531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.010539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 
00:44:00.228 [2024-06-10 11:52:29.010885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.010894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.011269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.011277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.011641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.011650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.011948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.011956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.012301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.012310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.012676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.012685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.013051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.013059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.013436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.013444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.013895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.013924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.014295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.014305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 
00:44:00.228 [2024-06-10 11:52:29.014679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.014689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.015045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.015053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.015402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.015410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.015881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.015910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.016282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.016291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.016674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.016684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.017035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.017044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.017385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.017393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.017871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.017900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.018220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.018229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 
00:44:00.228 [2024-06-10 11:52:29.018425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.018439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.018777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.018785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.019172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.019180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.019931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.019947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.020300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.020309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.020676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.020686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.021007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.021015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.021346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.021355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.021706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.021716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.022090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.022099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 
00:44:00.228 [2024-06-10 11:52:29.022464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.022472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.022849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.022857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.023113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.023121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.023497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.023505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.023874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.228 [2024-06-10 11:52:29.023883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.228 qpair failed and we were unable to recover it. 00:44:00.228 [2024-06-10 11:52:29.024248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.229 [2024-06-10 11:52:29.024257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.229 qpair failed and we were unable to recover it. 00:44:00.229 [2024-06-10 11:52:29.024602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.229 [2024-06-10 11:52:29.024610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.229 qpair failed and we were unable to recover it. 00:44:00.229 [2024-06-10 11:52:29.024930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.229 [2024-06-10 11:52:29.024938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.229 qpair failed and we were unable to recover it. 00:44:00.229 [2024-06-10 11:52:29.025310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.229 [2024-06-10 11:52:29.025318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.229 qpair failed and we were unable to recover it. 00:44:00.229 [2024-06-10 11:52:29.025686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.229 [2024-06-10 11:52:29.025695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.229 qpair failed and we were unable to recover it. 
00:44:00.229 [2024-06-10 11:52:29.026040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:00.229 [2024-06-10 11:52:29.026050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:00.229 qpair failed and we were unable to recover it.
00:44:00.235 [... the same three-line error sequence — connect() failed, errno = 111 (connection refused) / sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 11:52:29.026 through 11:52:29.101 ...]
00:44:00.235 [2024-06-10 11:52:29.101756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.101765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.101992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.101999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.102344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.102352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.102716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.102725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.102929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.102937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.103271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.103279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.103628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.103637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.103982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.103991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.104321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.104330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.104680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.104689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 
00:44:00.235 [2024-06-10 11:52:29.105048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.105055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.105436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.105445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.105836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.105844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.106083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.106091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.106287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.106296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.106580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.106589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.106911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.106921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.107261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.107270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.107618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.235 [2024-06-10 11:52:29.107627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.235 qpair failed and we were unable to recover it. 00:44:00.235 [2024-06-10 11:52:29.107968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.107976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 
00:44:00.236 [2024-06-10 11:52:29.108311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.108321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.108622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.108630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.108892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.108900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.109269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.109277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.109616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.109625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.109956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.109964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.110168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.110176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.110525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.110533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.110724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.110733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.110973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.110984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 
00:44:00.236 [2024-06-10 11:52:29.111242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.111250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.111482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.111490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.111817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.111825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.111989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.111997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.112359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.112368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.112740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.112748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.113092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.113101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.113429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.113439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.113777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.113785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.114120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.114129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 
00:44:00.236 [2024-06-10 11:52:29.114496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.114504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.114848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.114858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.115213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.115221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.115632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.115640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.115869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.115877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.116250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.116259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.116607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.116615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.116939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.116947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.117197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.117205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 00:44:00.236 [2024-06-10 11:52:29.117610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.117618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.236 qpair failed and we were unable to recover it. 
00:44:00.236 [2024-06-10 11:52:29.117969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.236 [2024-06-10 11:52:29.117978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.118348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.118356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.118724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.118733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.119137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.119146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.119488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.119495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.119826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.119835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.120212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.120220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.120561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.120570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.120939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.120948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.121278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.121287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 
00:44:00.237 [2024-06-10 11:52:29.121699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.121707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.125995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.126023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.126392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.126401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.126640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.126648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.127079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.127108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.127479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.127489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.127981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.128010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.128357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.128366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.128872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.128901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.129216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.129226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 
00:44:00.237 [2024-06-10 11:52:29.129593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.129601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.129923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.129932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.130271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.130279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.130657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.130666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.131083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.131092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.131462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.131470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.131816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.131824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.132155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.132166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.132464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.132471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.132811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.132819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 
00:44:00.237 [2024-06-10 11:52:29.133194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.133203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.133578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.133587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.133935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.133943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.134197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.134205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.237 [2024-06-10 11:52:29.134589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.237 [2024-06-10 11:52:29.134598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.237 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.134952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.134960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.135189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.135197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.135519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.135528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.135935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.135944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.136288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.136297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 
00:44:00.238 [2024-06-10 11:52:29.136679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.136688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.137005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.137016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.137221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.137230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.137569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.137578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.137929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.137937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.138280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.138289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.138500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.138508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.138721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.138729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.139013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.139021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.139368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.139376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 
00:44:00.238 [2024-06-10 11:52:29.139746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.139755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.140098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.140105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.140344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.140352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.140686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.140695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.141045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.141053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.141281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.141288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.141518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.141526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.141842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.141850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.142233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.142240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.142454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.142461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 
00:44:00.238 [2024-06-10 11:52:29.142767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.142775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.143146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.143155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.143495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.143503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.143881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.143889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.144250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.144259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.144627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.144635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.238 [2024-06-10 11:52:29.145003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.238 [2024-06-10 11:52:29.145011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.238 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.145381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.145391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.145600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.145608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.145825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.145835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 
00:44:00.239 [2024-06-10 11:52:29.146202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.146210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.146575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.146583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.146929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.146938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.147308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.147315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.147688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.147696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.148027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.148035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.148374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.148383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.148575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.148584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.148937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.148946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.149319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.149327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 
00:44:00.239 [2024-06-10 11:52:29.150129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.150146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.150508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.150517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.150859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.150867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.151247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.151254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.151603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.151611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.151954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.151962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.152332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.152341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.152685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.152694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.153042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.153050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.153415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.153423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 
00:44:00.239 [2024-06-10 11:52:29.153772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.153781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.154159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.154167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.154511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.154520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.154870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.154879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.155219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.155228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.155574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.155583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.155913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.155921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.156291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.156299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.156663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.156683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.157022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.157030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 
00:44:00.239 [2024-06-10 11:52:29.157397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.239 [2024-06-10 11:52:29.157405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.239 qpair failed and we were unable to recover it. 00:44:00.239 [2024-06-10 11:52:29.157757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.157767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.158114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.158122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.158471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.158480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.158818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.158826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.159185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.159193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.159511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.159521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.159868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.159878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.160164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.160173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.160518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.160525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 
00:44:00.240 [2024-06-10 11:52:29.160867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.160875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.161238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.161246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.161654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.161662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.161936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.161944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.162288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.162296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.162647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.162655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.163000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.163009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.163353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.163362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.163625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.163633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.163929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.163937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 
00:44:00.240 [2024-06-10 11:52:29.164241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.164248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.164591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.164599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.164950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.164959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.165311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.165319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.165636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.165645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.165992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.166000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.166343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.166352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.166763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.166770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.167093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.167102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.167462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.167470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 
00:44:00.240 [2024-06-10 11:52:29.167815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.167824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.168263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.168271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.168607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.168614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.168961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.168971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.169223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.169232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.169568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.169575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.240 [2024-06-10 11:52:29.170010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.240 [2024-06-10 11:52:29.170018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.240 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.170315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.170323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.170693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.170701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.171035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.171043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 
00:44:00.241 [2024-06-10 11:52:29.171401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.171409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.171762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.171770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.172143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.172151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.172512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.172521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.172773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.172781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.173126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.173134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.173486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.173494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.173851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.173861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.174218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.174226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.174447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.174455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 
00:44:00.241 [2024-06-10 11:52:29.174805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.174813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.175154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.175162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.175525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.175532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.241 [2024-06-10 11:52:29.175902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.241 [2024-06-10 11:52:29.175911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.241 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.176258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.176268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.176604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.176611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.176983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.176991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.177344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.177352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.177701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.177709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.178029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.178038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 
00:44:00.515 [2024-06-10 11:52:29.178399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.178407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.178756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.178765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.179089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.179097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.179333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.515 [2024-06-10 11:52:29.179340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.515 qpair failed and we were unable to recover it. 00:44:00.515 [2024-06-10 11:52:29.179687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.179696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.180016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.180024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.180338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.180346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.181089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.181105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.181454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.181462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.181826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.181835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 
00:44:00.516 [2024-06-10 11:52:29.182166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.182174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.182539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.182547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.182766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.182774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.183141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.183149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.183511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.183519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.183894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.183902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.184253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.184261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.184573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.184582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.184908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.184916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.185280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.185288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 
00:44:00.516 [2024-06-10 11:52:29.185634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.185643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.186007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.186016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.186384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.186392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.186759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.186767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.187094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.187103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.187470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.187479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.187850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.187858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.188552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.188569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.188934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.188943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.189753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.189770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 
00:44:00.516 [2024-06-10 11:52:29.190100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.190110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.190479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.190487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.190836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.190845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.191138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.191145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.191510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.191518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.191877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.191886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.192250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.192258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.192981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.192996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.193356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.193365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.193571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.193579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 
00:44:00.516 [2024-06-10 11:52:29.193934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.193951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.194335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.194346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.516 qpair failed and we were unable to recover it. 00:44:00.516 [2024-06-10 11:52:29.194710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.516 [2024-06-10 11:52:29.194719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.195383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.195401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.195743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.195752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.196451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.196467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.196827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.196835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.197411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.197427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.197784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.197792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.198350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.198365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 
00:44:00.517 [2024-06-10 11:52:29.198506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.198514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.198845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.198854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.199161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.199170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.199504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.199512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.199884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.199893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.200229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.200237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.200601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.200610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.200946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.200955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.201292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.201301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.201673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.201684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 
00:44:00.517 [2024-06-10 11:52:29.202506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.202520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.202797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.202806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.203526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.203541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.203903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.203913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.204486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.204501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.204845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.204855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.205175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.205185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.205529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.205539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.205961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.205968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.206296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.206303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 
00:44:00.517 [2024-06-10 11:52:29.206657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.206663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.207032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.207039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.207386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.207393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.207732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.207739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.208100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.208106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.208447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.208453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.208735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.208742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.209110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.209116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.209447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.209454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.209786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.209793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 
00:44:00.517 [2024-06-10 11:52:29.209956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.517 [2024-06-10 11:52:29.209965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.517 qpair failed and we were unable to recover it. 00:44:00.517 [2024-06-10 11:52:29.210291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.210298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.210538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.210545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.210900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.210907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.211276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.211283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.211628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.211635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.211976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.211985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.212276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.212290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.212632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.212638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 00:44:00.518 [2024-06-10 11:52:29.212985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.518 [2024-06-10 11:52:29.212991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.518 qpair failed and we were unable to recover it. 
00:44:00.518 [2024-06-10 11:52:29.213355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:44:00.518 [2024-06-10 11:52:29.213362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 
00:44:00.518 qpair failed and we were unable to recover it. 
[... the identical three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 11:52:29.213 through 11:52:29.283; the intervening duplicate entries are elided ...]
00:44:00.523 [2024-06-10 11:52:29.283213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:44:00.523 [2024-06-10 11:52:29.283220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 
00:44:00.523 qpair failed and we were unable to recover it. 
00:44:00.523 [2024-06-10 11:52:29.283592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.523 [2024-06-10 11:52:29.283599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.523 qpair failed and we were unable to recover it. 00:44:00.523 [2024-06-10 11:52:29.283976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.523 [2024-06-10 11:52:29.283983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.523 qpair failed and we were unable to recover it. 00:44:00.523 [2024-06-10 11:52:29.284320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.523 [2024-06-10 11:52:29.284328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.523 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.284540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.284547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.284894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.284910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.285244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.285250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.285585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.285591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.285830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.285837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.286178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.286186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.286552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.286560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 
00:44:00.524 [2024-06-10 11:52:29.286750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.286757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.287114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.287121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.287436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.287443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.287873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.287880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.288214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.288221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.288570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.288576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.288910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.288917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.289127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.289133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.289558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.289564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.289903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.289910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 
00:44:00.524 [2024-06-10 11:52:29.290261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.290268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.290643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.290650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.290979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.290986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.291346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.291353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.291659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.291665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.292045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.292053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.292418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.292425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.292764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.292771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.293016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.293022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.293402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.293408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 
00:44:00.524 [2024-06-10 11:52:29.293695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.293702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.293971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.293977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.294114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.294121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.294436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.294444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.294814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.294822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.524 [2024-06-10 11:52:29.295224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.524 [2024-06-10 11:52:29.295230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.524 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.295639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.295645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.295990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.295997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.296362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.296368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.296519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.296527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 
00:44:00.525 [2024-06-10 11:52:29.296868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.296875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.297247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.297253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.297491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.297497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.297834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.297842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.298075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.298082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.298442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.298449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.298814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.298822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.299161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.299168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.299509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.299516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.299889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.299896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 
00:44:00.525 [2024-06-10 11:52:29.300230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.300237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.300591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.300598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.300962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.300968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.301296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.301303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.301736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.301743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.302146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.302153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.302407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.302414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.302802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.302810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.303118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.303125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.303482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.303489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 
00:44:00.525 [2024-06-10 11:52:29.303821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.303827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.304177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.304184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.304527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.304533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.304945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.304952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.305155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.305162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.305524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.305530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.305858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.305865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.306094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.306101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.306510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.306518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.306763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.306771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 
00:44:00.525 [2024-06-10 11:52:29.307132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.307139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.307465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.307471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.307645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.307652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.308001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.525 [2024-06-10 11:52:29.308008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.525 qpair failed and we were unable to recover it. 00:44:00.525 [2024-06-10 11:52:29.308368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.308376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.308619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.308626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.309020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.309027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.309351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.309366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.309729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.309736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.310072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.310079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 
00:44:00.526 [2024-06-10 11:52:29.310331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.310337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.310676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.310683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.311032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.311038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.311382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.311388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.311596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.311603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.311945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.311952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.312136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.312143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.312511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.312518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.312934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.312941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.313353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.313360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 
00:44:00.526 [2024-06-10 11:52:29.313706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.313713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.314080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.314087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.314457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.314465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.314711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.314717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.315047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.315053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.315385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.315392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.315620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.315627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.315865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.315872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.316207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.316213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.316581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.316587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 
00:44:00.526 [2024-06-10 11:52:29.316907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.316914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.317265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.317272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.317600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.317607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.317972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.317987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.318356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.318363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.318690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.318696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.318934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.318940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.319303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.319310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.319686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.319694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.320010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.320017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 
00:44:00.526 [2024-06-10 11:52:29.320196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.320204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.320432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.320438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.526 [2024-06-10 11:52:29.320711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.526 [2024-06-10 11:52:29.320718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.526 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.320981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.320988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.321422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.321430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.321781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.321787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.322134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.322141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.322487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.322494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.322847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.322854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.323204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.323211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 
00:44:00.527 [2024-06-10 11:52:29.323485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.323491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.323859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.323866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.324153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.324160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.324514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.324520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.324860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.324866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.325220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.325227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.325590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.325596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.325848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.325855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.326220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.326227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 00:44:00.527 [2024-06-10 11:52:29.326561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.527 [2024-06-10 11:52:29.326567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.527 qpair failed and we were unable to recover it. 
00:44:00.527 [2024-06-10 11:52:29.326896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:00.527 [2024-06-10 11:52:29.326903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:00.527 qpair failed and we were unable to recover it.
00:44:00.527 [2024-06-10 11:52:29.327248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:00.527 [2024-06-10 11:52:29.327255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:00.527 qpair failed and we were unable to recover it.
00:44:00.527 [2024-06-10 11:52:29.327619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:00.527 [2024-06-10 11:52:29.327626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:00.527 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats unchanged for every subsequent reconnect attempt in this span, timestamps 2024-06-10 11:52:29.327990 through 11:52:29.396026 ...]
00:44:00.533 [2024-06-10 11:52:29.396351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.396357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.396710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.396717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.397125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.397132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.397460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.397466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.397801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.397808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.398172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.398178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.398515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.398521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.398865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.398872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.399089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.399096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.399414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.399420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 
00:44:00.533 [2024-06-10 11:52:29.399745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.399753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.400100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.400108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.400450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.400457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.400865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.400871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.401245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.401255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.401614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.401622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.401796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.401804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.402125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.402132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.402515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.402522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.402890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.402897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 
00:44:00.533 [2024-06-10 11:52:29.403131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.403137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.403489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.403496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.403832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.403838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.404200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.404207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.404579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.404585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.404926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.404934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.405267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.405273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.405608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.405614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.405981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.405989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.533 [2024-06-10 11:52:29.406351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.406358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 
00:44:00.533 [2024-06-10 11:52:29.406692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.533 [2024-06-10 11:52:29.406700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.533 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.406937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.406943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.407294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.407300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.407654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.407660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.407851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.407858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.408212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.408220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.408547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.408553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.408923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.408932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.409295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.409301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.409523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.409530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 
00:44:00.534 [2024-06-10 11:52:29.409795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.409802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.410044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.410052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.410418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.410426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.410798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.410806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.411142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.411150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.411513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.411520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.411672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.411679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.412044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.412051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.412376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.412385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.412745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.412752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 
00:44:00.534 [2024-06-10 11:52:29.413090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.413096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.413437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.413444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.413820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.413827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.414205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.414212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.414501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.414511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.414868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.414875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.415230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.415237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.415610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.415617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.415958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.415965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.416382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.416388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 
00:44:00.534 [2024-06-10 11:52:29.416742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.416750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.416860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.416867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.417222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.417229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.417563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.534 [2024-06-10 11:52:29.417570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.534 qpair failed and we were unable to recover it. 00:44:00.534 [2024-06-10 11:52:29.417825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.417831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.418207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.418221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.418491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.418498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.418846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.418854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.419203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.419210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.419537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.419544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 
00:44:00.535 [2024-06-10 11:52:29.419895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.419903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.420191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.420197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.420561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.420567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.421008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.421014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.421393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.421399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.421803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.421811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.422182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.422189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.422556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.422562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.422804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.422811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.423120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.423126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 
00:44:00.535 [2024-06-10 11:52:29.423468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.423475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.423810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.423818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.424198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.424205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.424504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.424511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.424695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.424702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.425056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.425063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.425467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.425474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.425720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.425727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.426050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.426056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.426394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.426401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 
00:44:00.535 [2024-06-10 11:52:29.426763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.426770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.427035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.427043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.427390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.535 [2024-06-10 11:52:29.427397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.535 qpair failed and we were unable to recover it. 00:44:00.535 [2024-06-10 11:52:29.427784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.427790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.428158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.428165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.428510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.428518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.428765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.428772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.429134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.429141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.429492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.429499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.429822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.429829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 
00:44:00.536 [2024-06-10 11:52:29.430176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.430183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.430519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.430526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.430882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.430888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.431105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.431112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.431323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.431330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.431681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.431688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.431916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.431922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.432328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.432335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.432667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.432678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.433050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.433057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 
00:44:00.536 [2024-06-10 11:52:29.433392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.433405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.433788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.433795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.434147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.434153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.434494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.434501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.434857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.434864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.435234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.435242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.435414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.435420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.435604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.435611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.436010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.436017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.436359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.436365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 
00:44:00.536 [2024-06-10 11:52:29.436730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.436737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.437004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.437012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.437345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.536 [2024-06-10 11:52:29.437352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.536 qpair failed and we were unable to recover it. 00:44:00.536 [2024-06-10 11:52:29.437595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.437602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.438000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.438006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.438342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.438349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.438704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.438711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.439088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.439094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.439428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.439435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.439749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.439756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 
00:44:00.537 [2024-06-10 11:52:29.440110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.440116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.440449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.440456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.440772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.440778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.441145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.441151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.441460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.441467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.441820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.441827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.442089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.442096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.442439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.442445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.442891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.442898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.443247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.443254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 
00:44:00.537 [2024-06-10 11:52:29.443599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.443606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.444014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.444020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.444371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.444377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.444722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.444729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.444978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.444985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.445335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.445341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.445522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.445529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.445873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.445880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.446215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.446222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.537 [2024-06-10 11:52:29.446578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.446584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 
00:44:00.537 [2024-06-10 11:52:29.446969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.537 [2024-06-10 11:52:29.446976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.537 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.447324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.447331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.447622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.447629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.447979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.447986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.448351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.448358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.448712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.448719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.448906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.448913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.449263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.449269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.449626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.449632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.449990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.449997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 
00:44:00.538 [2024-06-10 11:52:29.450324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.450330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.450611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.450619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.450955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.450962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.451301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.451308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.451675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.451682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.451962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.451970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.452313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.452320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.452653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.452660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.452850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.452857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.453202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.453210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 
00:44:00.538 [2024-06-10 11:52:29.453550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.453557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.453853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.453860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.454206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.454212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.454543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.454549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.454895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.454902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.455251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.538 [2024-06-10 11:52:29.455258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.538 qpair failed and we were unable to recover it. 00:44:00.538 [2024-06-10 11:52:29.455461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.455467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.455807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.455814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.456226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.456234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.456610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.456617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 
00:44:00.539 [2024-06-10 11:52:29.456842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.456849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.457108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.457115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.457362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.457369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.457733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.457741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.458084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.458090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.458342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.458348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.458598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.458604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.458716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.458723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.458915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.458922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.459260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.459267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 
00:44:00.539 [2024-06-10 11:52:29.459607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.459621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.459876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.459883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.460226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.460233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.460596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.460603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.461005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.461012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.461311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.461318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.461538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.461545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.539 qpair failed and we were unable to recover it. 00:44:00.539 [2024-06-10 11:52:29.461904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.539 [2024-06-10 11:52:29.461917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.462283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.462289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.462628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.462634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 
00:44:00.540 [2024-06-10 11:52:29.462924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.462931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.463292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.463299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.463642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.463649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.464009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.464015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.464344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.464350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.464517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.464524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.464883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.464890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.465230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.465236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.465564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.465572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.465941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.465948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 
00:44:00.540 [2024-06-10 11:52:29.466285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.466292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.466471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.466478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.466805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.466812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.467145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.467153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.467402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.467408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.467697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.467704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.468059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.540 [2024-06-10 11:52:29.468066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.540 qpair failed and we were unable to recover it. 00:44:00.540 [2024-06-10 11:52:29.468403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.468409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.468744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.468751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.469131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.469138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 
00:44:00.541 [2024-06-10 11:52:29.469473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.469479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.469711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.469718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.470038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.470045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.470392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.470400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.470760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.470766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.471124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.471131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.471555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.471561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.471936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.471942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.472299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.472306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.472522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.472529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 
00:44:00.541 [2024-06-10 11:52:29.472662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.472672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.473063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.473069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.473325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.473332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.473578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.473584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.541 [2024-06-10 11:52:29.473801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.541 [2024-06-10 11:52:29.473808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.541 qpair failed and we were unable to recover it. 00:44:00.820 [2024-06-10 11:52:29.474183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.820 [2024-06-10 11:52:29.474191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.820 qpair failed and we were unable to recover it. 00:44:00.820 [2024-06-10 11:52:29.474436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.820 [2024-06-10 11:52:29.474443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.820 qpair failed and we were unable to recover it. 00:44:00.820 [2024-06-10 11:52:29.474845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.820 [2024-06-10 11:52:29.474852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.820 qpair failed and we were unable to recover it. 00:44:00.820 [2024-06-10 11:52:29.475138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.820 [2024-06-10 11:52:29.475145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.820 qpair failed and we were unable to recover it. 00:44:00.820 [2024-06-10 11:52:29.475395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.820 [2024-06-10 11:52:29.475403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.820 qpair failed and we were unable to recover it. 
00:44:00.820 [2024-06-10 11:52:29.475611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.820 [2024-06-10 11:52:29.475618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.820 qpair failed and we were unable to recover it. 00:44:00.820 [2024-06-10 11:52:29.475995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.820 [2024-06-10 11:52:29.476004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.476089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.476096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.476374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.476382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.476728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.476735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.477090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.477097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.477315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.477322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.477680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.477687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.478029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.478036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.478420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.478427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 
00:44:00.821 [2024-06-10 11:52:29.478773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.478780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.479179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.479186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.479512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.479518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.479727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.479736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.480096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.480103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.480444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.480451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.480806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.480813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.481131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.481137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.481473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.481480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.481753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.481760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 
00:44:00.821 [2024-06-10 11:52:29.481931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.481948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.482288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.482295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.482654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.482662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.483049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.483057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.483401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.483408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.483738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.483745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.484152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.484159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.484516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.484522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.484889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.484896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.485241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.485248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 
00:44:00.821 [2024-06-10 11:52:29.485636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.485642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.485997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.486004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.486248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.486255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.486628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.486635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.487034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.487041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.487365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.487372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.487782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.487789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.488103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.488109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.488297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.821 [2024-06-10 11:52:29.488304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.821 qpair failed and we were unable to recover it. 00:44:00.821 [2024-06-10 11:52:29.488639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.488647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 
00:44:00.822 [2024-06-10 11:52:29.489014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.489020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.489424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.489432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.489699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.489706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.490077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.490084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.490299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.490306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.490651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.490659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.490850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.490858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.491172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.491180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.491502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.491509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.491766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.491773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 
00:44:00.822 [2024-06-10 11:52:29.492124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.492130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.492493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.492500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.492878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.492884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.493185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.493192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.493553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.493561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.493937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.493945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.494328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.494335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.494593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.494600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.494918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.494926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.495238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.495245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 
00:44:00.822 [2024-06-10 11:52:29.495603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.495611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.495797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.495804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.496231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.496237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.496592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.496599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.496897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.496904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.497270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.497277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.497625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.497632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.497986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.497992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.498224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.498231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.498614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.498621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 
00:44:00.822 [2024-06-10 11:52:29.499013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.499021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.499296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.499304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.499676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.499684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.500079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.500086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.500317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.500325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.500698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.500705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.501034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.501041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.822 [2024-06-10 11:52:29.501228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.822 [2024-06-10 11:52:29.501235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.822 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.501534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.501541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.501782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.501789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 
00:44:00.823 [2024-06-10 11:52:29.502104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.502111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.502357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.502365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.502720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.502727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.502859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.502865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.503114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.503121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.503448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.503455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.503706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.503713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.504093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.504100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.504442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.504449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.504823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.504830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 
00:44:00.823 [2024-06-10 11:52:29.505173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.505180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.505542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.505549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.505911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.505917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.506293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.506300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.506648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.506655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.507008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.507015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.507381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.507388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.507645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.507652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.508010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.508017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.508352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.508360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 
00:44:00.823 [2024-06-10 11:52:29.508737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.508744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.509086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.509093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.509439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.509445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.509856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.509863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.510102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.510108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.510490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.510496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.510680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.510688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.511023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.511030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.511401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.511408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.511641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.511648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 
00:44:00.823 [2024-06-10 11:52:29.512000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.512007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.512416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.512423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.512758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.512765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.513106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.513113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.513444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.513451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.823 qpair failed and we were unable to recover it. 00:44:00.823 [2024-06-10 11:52:29.513621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.823 [2024-06-10 11:52:29.513631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.513989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.513995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.514356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.514362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.514604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.514611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.514833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.514840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 
00:44:00.824 [2024-06-10 11:52:29.515258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.515266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.515639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.515647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.515940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.515948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.516197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.516203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.516540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.516547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.516910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.516917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.517264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.517270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.517600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.517607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.517940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.517947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.518293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.518300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 
00:44:00.824 [2024-06-10 11:52:29.518489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.518496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.518852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.518859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.519193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.519199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.519554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.519560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.519901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.519907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.520235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.520242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.520608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.520616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.520956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.520963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.521210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.521217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.521564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.521571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 
00:44:00.824 [2024-06-10 11:52:29.521921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.521928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.522271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.522279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.522627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.522634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.522863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.522871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.523245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.523252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.523633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.523640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.523994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.524001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.524391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.524399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.524766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.524774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.525124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.525131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 
00:44:00.824 [2024-06-10 11:52:29.525478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.525484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.525820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.525827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.526014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.526021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.824 [2024-06-10 11:52:29.526334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.824 [2024-06-10 11:52:29.526341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.824 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.526737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.526743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.527140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.527147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.527311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.527318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.527659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.527666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.528025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.528031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.528393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.528399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 
00:44:00.825 [2024-06-10 11:52:29.528765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.528772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.529111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.529120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.529469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.529482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.825 [2024-06-10 11:52:29.529830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.825 [2024-06-10 11:52:29.529837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.825 qpair failed and we were unable to recover it. 00:44:00.826 [2024-06-10 11:52:29.530034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.826 [2024-06-10 11:52:29.530041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.826 qpair failed and we were unable to recover it. 00:44:00.826 [2024-06-10 11:52:29.530395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.826 [2024-06-10 11:52:29.530401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.826 qpair failed and we were unable to recover it. 00:44:00.826 [2024-06-10 11:52:29.530747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.826 [2024-06-10 11:52:29.530755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.826 qpair failed and we were unable to recover it. 00:44:00.826 [2024-06-10 11:52:29.531104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.826 [2024-06-10 11:52:29.531111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.826 qpair failed and we were unable to recover it. 00:44:00.826 [2024-06-10 11:52:29.531478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.826 [2024-06-10 11:52:29.531486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.826 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.531857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.531864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 
00:44:00.827 [2024-06-10 11:52:29.532194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.532200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.532553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.532568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.532922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.532929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.533264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.533270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.533623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.533630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.533938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.533945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.534295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.534303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.534551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.534557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.534926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.534933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.535292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.535298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 
00:44:00.827 [2024-06-10 11:52:29.535524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.535532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.535789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.535796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.536134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.536140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.536486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.536493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.536742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.536749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.537089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.537096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.537338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.537346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.537581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.537588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.537945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.537952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.538287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.538293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 
00:44:00.827 [2024-06-10 11:52:29.538681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.538689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.539107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.539113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.539472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.539480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.539823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.539830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.540174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.540181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.540334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.540342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.540682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.540689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.541053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.541059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.541416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.541432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.541778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.541786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 
00:44:00.827 [2024-06-10 11:52:29.542121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.542127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.542473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.542481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.542831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.542837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.543241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.543247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.543594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.543602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.543953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.543959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.827 [2024-06-10 11:52:29.544284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.827 [2024-06-10 11:52:29.544290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.827 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.544495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.544503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.544881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.544888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.545251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.545258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 
00:44:00.828 [2024-06-10 11:52:29.545586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.545594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.545865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.545872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.546240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.546246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.546655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.546662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.547051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.547057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.547386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.547393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.547763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.547771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.547970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.547978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.548369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.548375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.548716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.548723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 
00:44:00.828 [2024-06-10 11:52:29.549049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.549056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.549427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.549433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.549764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.549771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.550133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.550140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.550374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.550381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.550565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.550572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.550900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.550907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.551266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.551273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.551687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.551694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.552031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.552039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 
00:44:00.828 [2024-06-10 11:52:29.552342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.552350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.552684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.552691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.553039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.553046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.553397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.553403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.553729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.553737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.554068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.554075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.554401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.554409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.554785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.554791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.555141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.555148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.555382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.555388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 
00:44:00.828 [2024-06-10 11:52:29.555758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.555765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.556092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.556101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.556456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.556463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.556648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.828 [2024-06-10 11:52:29.556656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.828 qpair failed and we were unable to recover it. 00:44:00.828 [2024-06-10 11:52:29.557001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.557008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.557323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.557330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.557542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.557549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.557793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.557800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.558171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.558177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.558505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.558513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 
00:44:00.829 [2024-06-10 11:52:29.558858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.558865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.559192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.559198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.559549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.559556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.559901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.559908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.560236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.560243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.560617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.560624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.560955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.560963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.561340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.561347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.561676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.561684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.561871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.561878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 
00:44:00.829 [2024-06-10 11:52:29.562333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.562340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.562685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.562692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.563043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.563049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.563374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.563381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.563695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.563702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.563969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.563976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.564327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.564333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.564657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.564663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.565021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.565037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.565277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.565284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 
00:44:00.829 [2024-06-10 11:52:29.565539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.565546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.565879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.565886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.566069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.566076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.566395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.566402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.566741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.566748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.567105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.567112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.567481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.567487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.567828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.567835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.568198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.568205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.568455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.568462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 
00:44:00.829 [2024-06-10 11:52:29.568840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.568848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.569191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.569198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.829 qpair failed and we were unable to recover it. 00:44:00.829 [2024-06-10 11:52:29.569530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.829 [2024-06-10 11:52:29.569536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.569771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.569778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.570144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.570151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.570398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.570406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.571308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.571324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.571654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.571662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.571992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.571999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.572337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.572343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 
00:44:00.830 [2024-06-10 11:52:29.572676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.572683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.573003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.573010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.573330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.573337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.573691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.573699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.574045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.574052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.574378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.574385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.574727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.574734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.575102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.575108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.575291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.575299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.575603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.575610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 
00:44:00.830 [2024-06-10 11:52:29.575964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.575977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.576340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.576347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.576682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.576689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.577016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.577023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.577366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.577372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.577731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.577738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.578048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.578056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.578400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.578406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.578785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.578793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.579147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.579162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 
00:44:00.830 [2024-06-10 11:52:29.579506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.579512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.579837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.579844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.580191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.580199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.580496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.580504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.580844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.580851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.581200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.581208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.581573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.581580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.581934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.581941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.582127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.582135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.830 [2024-06-10 11:52:29.582386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.582394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 
00:44:00.830 [2024-06-10 11:52:29.582724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.830 [2024-06-10 11:52:29.582732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.830 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.583087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.583096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.583443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.583450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.583617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.583624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.583976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.583983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.584349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.584356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.584655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.584662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.585029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.585037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.585407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.585415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.585776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.585784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 
00:44:00.831 [2024-06-10 11:52:29.586040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.586048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.586389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.586397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.586738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.586746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.587110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.587118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.587330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.587338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.587691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.587699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.588041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.588049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.588293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.588300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.588655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.588663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.588853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.588861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 
00:44:00.831 [2024-06-10 11:52:29.589229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.589238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.589605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.589612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.589968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.589976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.590314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.590321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.590690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.590697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.591072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.591080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.591302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.591310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.591623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.591630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.591960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.591967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.592308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.592316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 
00:44:00.831 [2024-06-10 11:52:29.592680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.592688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.592906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.592913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.593260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.831 [2024-06-10 11:52:29.593267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.831 qpair failed and we were unable to recover it. 00:44:00.831 [2024-06-10 11:52:29.593631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.593639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.593997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.594006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.594360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.594367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.594732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.594740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.595034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.595042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.595401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.595409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.595771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.595780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 
00:44:00.832 [2024-06-10 11:52:29.596123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.596130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.596375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.596384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.596732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.596739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.597137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.597144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.597511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.597518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.597678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.597685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.597892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.597900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.598321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.598329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.598675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.598683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.599012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.599019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 
00:44:00.832 [2024-06-10 11:52:29.599368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.599376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.599723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.599731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.600108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.600116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.600450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.600457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.600763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.600770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.601122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.601129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.601478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.601485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.601931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.601938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.602261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.602267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.602622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.602629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 
00:44:00.832 [2024-06-10 11:52:29.602894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.602901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.603245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.603252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.603412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.603419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.603781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.603788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.604010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.604016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.604368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.604374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.604607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.604614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.604834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.604841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.605185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.605194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.605523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.605529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 
00:44:00.832 [2024-06-10 11:52:29.605865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.832 [2024-06-10 11:52:29.605872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.832 qpair failed and we were unable to recover it. 00:44:00.832 [2024-06-10 11:52:29.606224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.606230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.606396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.606403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.606786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.606793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.607163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.607171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.607522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.607528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.607860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.607867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.608218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.608224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.608568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.608575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 00:44:00.833 [2024-06-10 11:52:29.608936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.833 [2024-06-10 11:52:29.608943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.833 qpair failed and we were unable to recover it. 
00:44:00.833 [2024-06-10 11:52:29.609300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:00.833 [2024-06-10 11:52:29.609307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:00.833 qpair failed and we were unable to recover it.
00:44:00.833 [... the same three-line failure repeats for every reconnection attempt from 11:52:29.609 through 11:52:29.679: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f6224000b90 (addr=10.0.0.2, port=4420), and each qpair fails and cannot be recovered ...]
00:44:00.838 [2024-06-10 11:52:29.679420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:00.838 [2024-06-10 11:52:29.679427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:00.838 qpair failed and we were unable to recover it.
00:44:00.838 [2024-06-10 11:52:29.679769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.838 [2024-06-10 11:52:29.679775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.838 qpair failed and we were unable to recover it. 00:44:00.838 [2024-06-10 11:52:29.680183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.838 [2024-06-10 11:52:29.680190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.838 qpair failed and we were unable to recover it. 00:44:00.838 [2024-06-10 11:52:29.680428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.838 [2024-06-10 11:52:29.680435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.838 qpair failed and we were unable to recover it. 00:44:00.838 [2024-06-10 11:52:29.680778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.838 [2024-06-10 11:52:29.680785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.838 qpair failed and we were unable to recover it. 00:44:00.838 [2024-06-10 11:52:29.681111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.838 [2024-06-10 11:52:29.681119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.838 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.681492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.681499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.681873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.681880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.682210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.682217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.682571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.682578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.682924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.682931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 
00:44:00.839 [2024-06-10 11:52:29.683269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.683276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.683660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.683667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.683856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.683864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.684195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.684211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.684580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.684587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.684930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.684938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.685283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.685289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.685484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.685490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.685748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.685757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.686113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.686119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 
00:44:00.839 [2024-06-10 11:52:29.686445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.686452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.686809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.686816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.687186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.687192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.687520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.687526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.687876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.687883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.688251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.688258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.688584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.688592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.688941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.688948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.689232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.689240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.689587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.689595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 
00:44:00.839 [2024-06-10 11:52:29.689965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.689973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.690402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.690409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.690761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.690768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.691126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.691134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.691474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.691481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.691851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.691858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.692210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.692217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.692576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.692583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.692923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.692930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.693296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.693303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 
00:44:00.839 [2024-06-10 11:52:29.693717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.693724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.693957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.693964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.694335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.839 [2024-06-10 11:52:29.694341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.839 qpair failed and we were unable to recover it. 00:44:00.839 [2024-06-10 11:52:29.694536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.694542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.694722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.694729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.695006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.695013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.695425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.695432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.695769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.695777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.696161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.696167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.696537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.696543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 
00:44:00.840 [2024-06-10 11:52:29.696774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.696781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.697130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.697137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.697536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.697543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.697815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.697822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.698178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.698184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.698545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.698551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.698896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.698903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.699238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.699244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.699499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.699506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.699691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.699699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 
00:44:00.840 [2024-06-10 11:52:29.700004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.700011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.700380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.700387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.700750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.700757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.701134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.701140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.701550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.701557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.701877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.701885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.702218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.702224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.702556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.702562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.702922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.702938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.703302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.703309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 
00:44:00.840 [2024-06-10 11:52:29.703636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.703643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.703946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.703953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.704218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.704224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.704601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.704609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.704953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.840 [2024-06-10 11:52:29.704960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.840 qpair failed and we were unable to recover it. 00:44:00.840 [2024-06-10 11:52:29.705285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.705292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.705642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.705649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.706021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.706029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.706223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.706231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.706592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.706599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 
00:44:00.841 [2024-06-10 11:52:29.706929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.706936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.707335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.707343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.707681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.707688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.707964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.707971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.708254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.708260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.708604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.708611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.708946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.708953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.709319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.709326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.709682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.709689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.710020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.710027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 
00:44:00.841 [2024-06-10 11:52:29.710276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.710283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.710751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.710758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.711104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.711110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.711475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.711482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.711839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.711847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.712164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.712170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.712502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.712508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.712925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.712932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.713268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.713276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.713471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.713479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 
00:44:00.841 [2024-06-10 11:52:29.713832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.713838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.714174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.714180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.714412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.714418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.714693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.714700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.715072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.715079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.715453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.715460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.715804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.715811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.716231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.716238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.716592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.716599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.716944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.716951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 
00:44:00.841 [2024-06-10 11:52:29.717305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.717312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.717662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.717668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.841 [2024-06-10 11:52:29.718015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.841 [2024-06-10 11:52:29.718022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.841 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.718393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.718407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.718800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.718807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.719146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.719153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.719519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.719532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.719814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.719820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.720041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.720048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.720331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.720337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 
00:44:00.842 [2024-06-10 11:52:29.720679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.720687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.721041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.721048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.721461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.721468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.721827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.721834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.722183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.722189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.722442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.722449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.722825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.722832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.723088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.723095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.723444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.723451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 00:44:00.842 [2024-06-10 11:52:29.723829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.723836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it. 
00:44:00.842 [2024-06-10 11:52:29.724199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:00.842 [2024-06-10 11:52:29.724205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:00.842 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 2024-06-10 11:52:29.724199 through 11:52:29.794243 (log timestamps 00:44:00.842 to 00:44:01.124); duplicate occurrences elided ...]
00:44:01.124 [2024-06-10 11:52:29.794237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.794243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it.
00:44:01.124 [2024-06-10 11:52:29.794631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.794637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.794887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.794894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.795262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.795268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.795594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.795600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.795939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.795954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.796205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.796212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.796548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.796555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.796891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.796898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.797253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.797261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.797610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.797617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 
00:44:01.124 [2024-06-10 11:52:29.797955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.797962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.798292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.798299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.798650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.798657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.799069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.799077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.799448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.799455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.799684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.799691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.800033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.800040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.800365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.800372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.800745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.800751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.801091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.801098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 
00:44:01.124 [2024-06-10 11:52:29.801282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.801290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.801627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.801633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.801874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.801882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.802220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.802227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.802576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.802583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.802894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.802902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.803231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.803239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.803572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.803580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.803827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.803833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.804177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.804184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 
00:44:01.124 [2024-06-10 11:52:29.804549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.804556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.804934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.804941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.805268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.805275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.805620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.805627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.805995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.806002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.806330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.806336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.806661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.806668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.807022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.807029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.124 [2024-06-10 11:52:29.807215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.124 [2024-06-10 11:52:29.807223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.124 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.807603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.807610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 
00:44:01.125 [2024-06-10 11:52:29.807844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.807851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.808178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.808185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.808529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.808537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.808903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.808910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.809236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.809242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.809500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.809507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.809883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.809891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.810225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.810231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.810597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.810604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.810957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.810964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 
00:44:01.125 [2024-06-10 11:52:29.811371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.811378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.811738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.811745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.812095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.812103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.812470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.812478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.812830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.812837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.813247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.813254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.813590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.813597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.813947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.813954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.814321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.814329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.814679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.814687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 
00:44:01.125 [2024-06-10 11:52:29.815036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.815042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.815366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.815373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.815740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.815747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.816101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.816115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.816456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.816463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.816754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.816762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.817100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.817107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.817473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.817481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.817904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.817911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.818252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.818259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 
00:44:01.125 [2024-06-10 11:52:29.818598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.818605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.818879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.818887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.819243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.819249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.819642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.819649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.819990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.819998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.820406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.820413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.820586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.820594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.820919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.820927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.821264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.821270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.821713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.821721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 
00:44:01.125 [2024-06-10 11:52:29.821978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.821984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.822349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.822357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.822596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.822604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.822955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.822961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.823290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.823298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.823649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.823656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.824034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.824042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.824416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.824424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.125 [2024-06-10 11:52:29.824607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.125 [2024-06-10 11:52:29.824615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.125 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.824852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.824860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 
00:44:01.126 [2024-06-10 11:52:29.825200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.825207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.825582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.825590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.825943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.825950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.826288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.826296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.826675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.826683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.827006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.827012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.827186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.827193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.827394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.827409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.827757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.827763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.828113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.828121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 
00:44:01.126 [2024-06-10 11:52:29.828472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.828479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.828806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.828813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.829177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.829184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.829558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.829564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.829878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.829893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.830238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.830246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.830580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.830588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.830967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.830973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.831216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.831223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.831576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.831583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 
00:44:01.126 [2024-06-10 11:52:29.831886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.831894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.832214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.832221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.832573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.832580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.832941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.832948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.833180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.833186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.833550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.833557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.833806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.833813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.834054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.834061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.834422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.834430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.834804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.834811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 
00:44:01.126 [2024-06-10 11:52:29.835146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.835152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.835509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.835515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.835742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.835749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.836080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.836086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.836436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.836442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.836688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.836695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.836938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.836945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.837340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.837347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.837690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.837697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.126 [2024-06-10 11:52:29.837912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.837918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 
00:44:01.126 [2024-06-10 11:52:29.838132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.126 [2024-06-10 11:52:29.838139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.126 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.838513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.838519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.838797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.838804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.839159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.839165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.839351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.839358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.839681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.839687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.840046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.840052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.840433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.840440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.840577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.840583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 00:44:01.127 [2024-06-10 11:52:29.840971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.127 [2024-06-10 11:52:29.840978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.127 qpair failed and we were unable to recover it. 
00:44:01.131 [2024-06-10 11:52:29.905294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.905301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.905637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.905644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.906002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.906009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.906377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.906383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.906717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.906724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.907082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.907092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.907293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.907301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.907640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.907647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.907978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.131 [2024-06-10 11:52:29.907985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.131 qpair failed and we were unable to recover it. 00:44:01.131 [2024-06-10 11:52:29.908310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.908318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 
00:44:01.132 [2024-06-10 11:52:29.908666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.908676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.908877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.908884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.909235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.909241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.909586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.909593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.909945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.909952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.910318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.910325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.910668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.910678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.911005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.911011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.911370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.911377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.911750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.911757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 
00:44:01.132 [2024-06-10 11:52:29.912084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.912090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.912443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.912450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.912799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.912805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.913137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.913143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.913475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.913481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.913679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.913687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.913997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.914004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.914372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.914379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.914644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.914650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.914996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.915003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 
00:44:01.132 [2024-06-10 11:52:29.915331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.915337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.915677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.915684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.916061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.916067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.916246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.916253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.916552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.916559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.916899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.916906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.917219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.917225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.917554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.917561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.917813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.917819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.918031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.918038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 
00:44:01.132 [2024-06-10 11:52:29.918450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.918457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.918694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.918701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.918890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.918897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.919230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.919236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.919608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.919614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.919989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.919997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.920175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.920182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.920578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.920585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.920926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.920933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.921282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.921289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 
00:44:01.132 [2024-06-10 11:52:29.921614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.921620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.921968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.921975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.922300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.132 [2024-06-10 11:52:29.922307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.132 qpair failed and we were unable to recover it. 00:44:01.132 [2024-06-10 11:52:29.922636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.922642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.922997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.923004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.923366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.923372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.923699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.923706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.924036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.924043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.924390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.924396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.924763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.924770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 
00:44:01.133 [2024-06-10 11:52:29.925104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.925111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.925362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.925369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.925728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.925736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.926083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.926089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.926425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.926431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.926770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.926776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.927036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.927042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.927416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.927422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.927763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.927770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.928137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.928144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 
00:44:01.133 [2024-06-10 11:52:29.928491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.928497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.928872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.928878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.929219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.929226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.929528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.929535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.929853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.929861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.930194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.930200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.930529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.930535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.930877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.930884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.931227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.931233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.931556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.931562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 
00:44:01.133 [2024-06-10 11:52:29.931919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.931925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.932271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.932277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.932611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.932618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.932956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.932963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.933294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.933302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.933608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.933616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.933969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.933976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.934308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.934315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.934688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.934695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.935040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.935046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 
00:44:01.133 [2024-06-10 11:52:29.935374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.935380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.935725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.935732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.936100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.936106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.936434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.936440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.936739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.936745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.937107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.937113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.937447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.937454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.937803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.133 [2024-06-10 11:52:29.937810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.133 qpair failed and we were unable to recover it. 00:44:01.133 [2024-06-10 11:52:29.938166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.938173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.938500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.938506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 
00:44:01.134 [2024-06-10 11:52:29.938860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.938866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.939211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.939217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.939551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.939558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.939914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.939921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.940290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.940297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.940641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.940648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.940997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.941003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.941330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.941336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.941682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.941689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.942045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.942052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 
00:44:01.134 [2024-06-10 11:52:29.942295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.942302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.942675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.942683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.943013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.943020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.943357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.943364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.943606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.943613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.943996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.944002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.944368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.944374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.944725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.944731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.945145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.945152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.945520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.945527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 
00:44:01.134 [2024-06-10 11:52:29.945851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.945858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.946204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.946210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.946455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.946461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.946679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.946685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.947058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.947064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.947396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.947403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.947756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.947763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.948105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.948112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.948443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.948449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 00:44:01.134 [2024-06-10 11:52:29.948803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.134 [2024-06-10 11:52:29.948810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.134 qpair failed and we were unable to recover it. 
00:44:01.134 [2024-06-10 11:52:29.949170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:01.134 [2024-06-10 11:52:29.949176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:01.134 qpair failed and we were unable to recover it.
00:44:01.134-00:44:01.138 [2024-06-10 11:52:29.949500 .. 11:52:30.019411] (the three error lines above repeat, with new timestamps, for every subsequent connection attempt to 10.0.0.2, port 4420 on tqpair=0x7f6224000b90 in this interval; every attempt fails with errno = 111 and the qpair is never recovered)
00:44:01.138 [2024-06-10 11:52:30.019751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.019758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.020101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.020108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.020457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.020463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.020703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.020710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.020961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.020967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.021194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.021201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.021486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.021493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.021598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.138 [2024-06-10 11:52:30.021605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.138 qpair failed and we were unable to recover it. 00:44:01.138 [2024-06-10 11:52:30.021758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.021766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.022069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.022076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 
00:44:01.139 [2024-06-10 11:52:30.022364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.022371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.022548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.022556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.022864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.022872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.023038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.023046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.023427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.023445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.023649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.023659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.023941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.023956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.024389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.024405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.024559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.024589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.024751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.024762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 
00:44:01.139 [2024-06-10 11:52:30.024905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.024915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.025079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.025106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.025323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.025336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.025807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.025816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.025929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.025935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.026091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.026098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.026366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.026373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.026644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.026651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.026799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.026808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.027152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.027158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 
00:44:01.139 [2024-06-10 11:52:30.027554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.027560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.027950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.027957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.028265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.028271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.028674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.028681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.029070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.029078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.029427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.029434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.029792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.029799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.030154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.030161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.030463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.030470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.030698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.030706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 
00:44:01.139 [2024-06-10 11:52:30.030936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.030943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.031336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.031343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.031677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.031685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.032029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.032036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.032381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.032388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.032622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.032629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.032984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.032992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.033337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.033343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.033655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.033662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.034028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.034035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 
00:44:01.139 [2024-06-10 11:52:30.034341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.034348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.034726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.034734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.034968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.034976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.035219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.035226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.035466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.035472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.035811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.035819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.036181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.036188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.036375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.036383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.036700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.036708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.037074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.037081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 
00:44:01.139 [2024-06-10 11:52:30.037410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.037417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.037665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.037674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.037904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.037911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.038276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.038283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.038614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.038621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.038991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.038998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.039223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.039230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.039580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.039587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.039783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.139 [2024-06-10 11:52:30.039791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.139 qpair failed and we were unable to recover it. 00:44:01.139 [2024-06-10 11:52:30.040048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.040056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 
00:44:01.140 [2024-06-10 11:52:30.040404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.040410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.040615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.040622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.040969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.040976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.041302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.041310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.041729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.041736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.041931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.041938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.042269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.042277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.042605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.042613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.042847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.042855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.043208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.043215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 
00:44:01.140 [2024-06-10 11:52:30.043542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.043549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.043789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.043796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.044150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.044157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.044491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.044499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.044875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.044883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.045231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.045239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.045619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.045625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.045965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.045972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.046367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.046373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.046665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.046676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 
00:44:01.140 [2024-06-10 11:52:30.046890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.046896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.047237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.047244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.047609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.047617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.048034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.048041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.048363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.048369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.048557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.048563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.048986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.048994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.049207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.049214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.049568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.049575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.049943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.049951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 
00:44:01.140 [2024-06-10 11:52:30.050317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.050324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.050573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.050579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.050815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.050822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.051027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.051034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.051408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.051416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.051766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.051773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.051980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.051988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.052318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.052325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.052703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.052710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.053059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.053066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 
00:44:01.140 [2024-06-10 11:52:30.053277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.053285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.053487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.053493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.053813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.053820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.054168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.054175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.054428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.054436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.054776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.054782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.055128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.055135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.055491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.055503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.055854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.055861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.056234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.056240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 
00:44:01.140 [2024-06-10 11:52:30.056583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.056590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.056942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.056949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.057276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.057283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.057636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.057642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.058011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.058018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.058346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.058353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.140 qpair failed and we were unable to recover it. 00:44:01.140 [2024-06-10 11:52:30.058707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.140 [2024-06-10 11:52:30.058714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.141 qpair failed and we were unable to recover it. 00:44:01.141 [2024-06-10 11:52:30.059071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.141 [2024-06-10 11:52:30.059078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.141 qpair failed and we were unable to recover it. 00:44:01.141 [2024-06-10 11:52:30.059403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.141 [2024-06-10 11:52:30.059411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.141 qpair failed and we were unable to recover it. 00:44:01.141 [2024-06-10 11:52:30.059760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.141 [2024-06-10 11:52:30.059767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.141 qpair failed and we were unable to recover it. 
00:44:01.141 [2024-06-10 11:52:30.060108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.141 [2024-06-10 11:52:30.060114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.141 qpair failed and we were unable to recover it.
00:44:01.141 [... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt with event timestamps from 11:52:30.060 through 11:52:30.126 (console timestamps 00:44:01.141 to 00:44:01.423); the repeated entries are condensed here ...]
00:44:01.422 [2024-06-10 11:52:30.126023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.126031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it.
00:44:01.423 [2024-06-10 11:52:30.126392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.126399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.126601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.126608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.126966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.126974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.127312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.127318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.127676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.127684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.128058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.128065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.128408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.128415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.128762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.128769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.129101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.129108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.129468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.129483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 
00:44:01.423 [2024-06-10 11:52:30.129840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.129847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.130049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.130056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.130480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.130486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.130720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.130727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.131111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.131117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.131488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.131496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.131847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.131854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.132215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.132221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.132493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.132500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.132827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.132835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 
00:44:01.423 [2024-06-10 11:52:30.133185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.133192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.133567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.133573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.133924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.133932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.134312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.134318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.134648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.134655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.135019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.135026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.135425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.135432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.135734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.135741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.423 [2024-06-10 11:52:30.135962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.423 [2024-06-10 11:52:30.135969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.423 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.136335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.136342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 
00:44:01.424 [2024-06-10 11:52:30.136714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.136721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.137076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.137083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.137404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.137411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.137761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.137768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.138145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.138152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.138485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.138493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.138732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.138739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.139111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.139118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.139373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.139380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.139768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.139774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 
00:44:01.424 [2024-06-10 11:52:30.140115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.140121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.140445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.140452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.140826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.140833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.141150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.141157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.141510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.141517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.141867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.141874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.142204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.142210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.142510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.142517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.142859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.142866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.143221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.143236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 
00:44:01.424 [2024-06-10 11:52:30.143583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.143590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.143954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.143961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.144175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.144182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.144535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.144542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.144912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.144919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.145251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.145257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.145622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.145633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.145986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.145993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.146323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.146330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.146654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.146662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 
00:44:01.424 [2024-06-10 11:52:30.146836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.146844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.147172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.147179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.147545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.147552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.147638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.147644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.147949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.147956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.148159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.148166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.148519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.424 [2024-06-10 11:52:30.148527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.424 qpair failed and we were unable to recover it. 00:44:01.424 [2024-06-10 11:52:30.148702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.148710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.149049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.149056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.149373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.149379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 
00:44:01.425 [2024-06-10 11:52:30.149786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.149793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.150004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.150011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.150311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.150317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.150665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.150675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.151024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.151030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.151364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.151371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.151703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.151711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.152045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.152051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.152387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.152394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.152718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.152725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 
00:44:01.425 [2024-06-10 11:52:30.153038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.153045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.153408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.153414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.153727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.153734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.154059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.154065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.154394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.154400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.154721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.154729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.155074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.155081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.155406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.155413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.155822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.155829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.156179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.156186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 
00:44:01.425 [2024-06-10 11:52:30.156496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.156503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.156879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.156886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.157215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.157221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.157535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.157541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.157873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.157880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.158253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.158260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.158593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.158600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.158947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.158954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.159282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.159289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.159692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.159700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 
00:44:01.425 [2024-06-10 11:52:30.160014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.160020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.160388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.160395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.160724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.160731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.160923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.425 [2024-06-10 11:52:30.160931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.425 qpair failed and we were unable to recover it. 00:44:01.425 [2024-06-10 11:52:30.161249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.161255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.161626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.161632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.161875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.161882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.162224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.162231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.162557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.162563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.162823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.162830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 
00:44:01.426 [2024-06-10 11:52:30.163198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.163204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.163530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.163536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.163865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.163872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.164242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.164249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.164492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.164499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.164747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.164753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.165108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.165115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.165466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.165472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.165701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.165708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.165941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.165955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 
00:44:01.426 [2024-06-10 11:52:30.166311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.166318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.166675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.166681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.167033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.167039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.167367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.167374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.167726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.167733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.168076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.168082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.168415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.168421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.168646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.168652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.169007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.169014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 00:44:01.426 [2024-06-10 11:52:30.169337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.426 [2024-06-10 11:52:30.169343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.426 qpair failed and we were unable to recover it. 
00:44:01.426 [2024-06-10 11:52:30.169667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:01.426 [2024-06-10 11:52:30.169679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:01.426 qpair failed and we were unable to recover it.
00:44:01.426 [... the same three-line pattern — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 11:52:30.169 through 11:52:30.238 ...]
00:44:01.432 [2024-06-10 11:52:30.238447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:01.432 [2024-06-10 11:52:30.238455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:01.432 qpair failed and we were unable to recover it.
00:44:01.432 [2024-06-10 11:52:30.238799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.238805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.239255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.239261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.239582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.239589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.239929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.239935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.240265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.240271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.240597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.240603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.240940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.240947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.241322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.241328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.241657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.241664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.242054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.242062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 
00:44:01.432 [2024-06-10 11:52:30.242420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.242427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.242773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.242780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.243020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.243026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.243394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.243401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.243747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.243754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.244095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.244102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.244446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.244452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.244831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.244838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.245163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.245170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 00:44:01.432 [2024-06-10 11:52:30.245509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.245516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.432 qpair failed and we were unable to recover it. 
00:44:01.432 [2024-06-10 11:52:30.245878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.432 [2024-06-10 11:52:30.245885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.246210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.246217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.246542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.246548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.246884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.246891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.247225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.247231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.247573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.247579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.247932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.247938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.248264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.248270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.248472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.248479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.248723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.248730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 
00:44:01.433 [2024-06-10 11:52:30.249004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.249011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.249360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.249367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.249730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.249737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.250099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.250105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.250437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.250444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.250778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.250785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.251196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.251202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.251532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.251539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.251864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.251871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.252210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.252219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 
00:44:01.433 [2024-06-10 11:52:30.252548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.252555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.252798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.252805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.253140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.253146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.253477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.253483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.253802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.253809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.254149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.254155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.254484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.254490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.254860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.254867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.255275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.255282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.255615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.255622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 
00:44:01.433 [2024-06-10 11:52:30.255995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.256002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.256328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.256334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.256662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.256668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.257002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.257009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.257340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.257346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.257678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.257685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.258054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.258061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.258430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.258437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.433 [2024-06-10 11:52:30.258783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.433 [2024-06-10 11:52:30.258790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.433 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.259136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.259142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 
00:44:01.434 [2024-06-10 11:52:30.259472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.259478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.259804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.259811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.260141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.260147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.260475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.260482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.260822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.260829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.261198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.261205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.261450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.261457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.261815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.261822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.262109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.262115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.262469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.262475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 
00:44:01.434 [2024-06-10 11:52:30.262801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.262808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.263133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.263139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.263471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.263477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.263803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.263810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.264141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.264148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.264498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.264505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.264851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.264859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.265194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.265200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.265527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.265533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.265865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.265873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 
00:44:01.434 [2024-06-10 11:52:30.266188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.266194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.266323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.266329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.266655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.266662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.266991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.266998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.267324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.267330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.267657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.267663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.267985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.267992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.268344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.268351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.268604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.268611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.268927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.268934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 
00:44:01.434 [2024-06-10 11:52:30.269301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.269307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.269631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.269637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.269953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.269960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.270328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.270335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.270588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.270595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.270931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.270938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.271303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.434 [2024-06-10 11:52:30.271309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.434 qpair failed and we were unable to recover it. 00:44:01.434 [2024-06-10 11:52:30.271634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.271640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.271888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.271894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.272225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.272231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 
00:44:01.435 [2024-06-10 11:52:30.272566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.272572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.272976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.272983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.273309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.273315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.273633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.273640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.273995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.274003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.274339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.274346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.274680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.274687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.275034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.275041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.275411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.275418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.275785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.275792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 
00:44:01.435 [2024-06-10 11:52:30.276120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.276126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.276477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.276484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.276825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.276832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.277150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.277156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.277235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.277242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.277430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.277437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.277796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.277802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.278175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.278181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.278508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.278514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.278716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.278723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 
00:44:01.435 [2024-06-10 11:52:30.279047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.279053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.279419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.279425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.279752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.279759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.280019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.280026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.280374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.280380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.280704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.280711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.281045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.281051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.281426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.281433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.281780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.281787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.282121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.282127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 
00:44:01.435 [2024-06-10 11:52:30.282362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.282368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.282702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.282708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.283053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.283060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.283405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.283412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.283740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.283747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.283932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.435 [2024-06-10 11:52:30.283940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.435 qpair failed and we were unable to recover it. 00:44:01.435 [2024-06-10 11:52:30.284235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.284241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.284547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.284554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.284919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.284926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.285252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.285258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 
00:44:01.436 [2024-06-10 11:52:30.285621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.285627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.285921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.285928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.286260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.286267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.286643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.286650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.286997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.287005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.287358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.287365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.287698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.287707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.287965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.287971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.288298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.288304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.288637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.288644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 
00:44:01.436 [2024-06-10 11:52:30.288905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.288911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.289239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.289246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.289573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.289579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.289928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.289934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.290265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.290272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.290637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.290644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.290985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.290992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.291338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.291345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.291684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.291690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.292051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.292057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 
00:44:01.436 [2024-06-10 11:52:30.292424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.292430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.292799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.292806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.292986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.292993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.293264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.293270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.293602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.293608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.293999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.294005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.294331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.294337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.294676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.294682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.295032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.295038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.295276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.295283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 
00:44:01.436 [2024-06-10 11:52:30.295540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.295546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.295879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.295885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.296078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.296085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.296281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.296288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.436 qpair failed and we were unable to recover it. 00:44:01.436 [2024-06-10 11:52:30.296515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.436 [2024-06-10 11:52:30.296522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.296779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.296786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.297170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.297176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.297413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.297421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.297794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.297801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.298026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.298033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 
00:44:01.437 [2024-06-10 11:52:30.298336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.298342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.298640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.298647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.299000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.299006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.299361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.299369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.299709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.299717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.299983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.299990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.300238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.300248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.300627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.300633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.301002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.301009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.301417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.301424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 
00:44:01.437 [2024-06-10 11:52:30.301675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.301683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.301931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.301939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.302270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.302276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.302645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.302651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.303030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.303038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.303382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.303390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.303738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.303744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.304073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.304079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.304445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.304452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.304703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.304710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 
00:44:01.437 [2024-06-10 11:52:30.305057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.305063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.305392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.305398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.305771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.305778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.306188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.306195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.306523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.306530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.306879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.306887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.307225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.437 [2024-06-10 11:52:30.307231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.437 qpair failed and we were unable to recover it. 00:44:01.437 [2024-06-10 11:52:30.307566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.307573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.307923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.307930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.308330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.308336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 
00:44:01.438 [2024-06-10 11:52:30.308656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.308662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.308989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.308995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.309322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.309328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.309614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.309621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.309944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.309952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.310284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.310291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.310637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.310643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.310973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.310979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.311305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.311311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.311646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.311652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 
00:44:01.438 [2024-06-10 11:52:30.311993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.311999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.312242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.312249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.312615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.312621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.312951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.312957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.313282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.313288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.313496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.313504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.313854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.313862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.314190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.314197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.314528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.314534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.314854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.314861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 
00:44:01.438 [2024-06-10 11:52:30.315233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.315239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.315576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.315583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.315929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.315935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.316280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.316286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.316612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.316619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.317033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.317040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.317301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.317307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.317618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.317624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.317955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.317962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.318291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.318298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 
00:44:01.438 [2024-06-10 11:52:30.318636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.318643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.318994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.319001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.319290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.319297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.319677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.319684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.320023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.320030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.438 qpair failed and we were unable to recover it. 00:44:01.438 [2024-06-10 11:52:30.320377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.438 [2024-06-10 11:52:30.320383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.320710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.320716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.321051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.321058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.321293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.321299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.321651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.321657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 
00:44:01.439 [2024-06-10 11:52:30.322019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.322026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.322394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.322401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.322758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.322765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.322956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.322963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.323321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.323328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.323564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.323571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.323959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.323965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.324290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.324297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.324624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.324630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.325040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.325046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 
00:44:01.439 [2024-06-10 11:52:30.325227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.325235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.325452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.325459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.325788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.325795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.326123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.326129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.326473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.326480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.326736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.326743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.327069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.327078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.327282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.327289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.327653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.327660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.327992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.328000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 
00:44:01.439 [2024-06-10 11:52:30.328422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.328429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.328807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.328814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.329142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.329149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.329482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.329488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.329833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.329840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.330226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.330232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.330646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.330652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.331009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.331017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.331383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.331391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.331753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.331760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 
00:44:01.439 [2024-06-10 11:52:30.332114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.332120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.332459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.332465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.332857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.332863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.333197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.439 [2024-06-10 11:52:30.333204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.439 qpair failed and we were unable to recover it. 00:44:01.439 [2024-06-10 11:52:30.333526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.333532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.333894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.333901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.334126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.334132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.334398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.334404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.334634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.334641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.335039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.335047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 
00:44:01.440 [2024-06-10 11:52:30.335376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.335383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.335517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.335524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.335844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.335851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.336179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.336186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.336523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.336529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.336857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.336863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.337106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.337113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.337509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.337516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.337851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.337857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.338189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.338196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 
00:44:01.440 [2024-06-10 11:52:30.338549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.338555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.338947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.338954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.339281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.339287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.339613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.339620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.339950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.339957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.340249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.340255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.340616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.340624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.340973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.340980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.341307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.341313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.341639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.341645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 
00:44:01.440 [2024-06-10 11:52:30.342041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.342048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.342393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.342399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.342738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.342745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.343104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.343110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.343441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.343448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.343820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.343826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.344118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.344125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.344468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.344475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.344821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.344828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.345156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.345163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 
00:44:01.440 [2024-06-10 11:52:30.345548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.345554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.345891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.345897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.440 [2024-06-10 11:52:30.346239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.440 [2024-06-10 11:52:30.346246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.440 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.346613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.346621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.346954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.346961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.347290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.347296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.347624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.347630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.347823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.347830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.348196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.348202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.348571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.348577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 
00:44:01.441 [2024-06-10 11:52:30.348973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.348979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.349305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.349312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.349558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.349564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.349913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.349919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.350251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.350257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.350587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.350594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.351015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.351022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.351350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.351356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.351727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.351733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.352064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.352071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 
00:44:01.441 [2024-06-10 11:52:30.352398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.352404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.352734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.352741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.353117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.353123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.353451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.353458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.353824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.353830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.354159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.354165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.354496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.354514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.354853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.354860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.355191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.355198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.355548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.355555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 
00:44:01.441 [2024-06-10 11:52:30.355898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.355904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.356304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.356310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.356588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.356594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.356971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.356979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.357350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.441 [2024-06-10 11:52:30.357357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.441 qpair failed and we were unable to recover it. 00:44:01.441 [2024-06-10 11:52:30.357729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.357736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.358073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.358080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.358407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.358413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.358744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.358751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.359125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.359132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 
00:44:01.442 [2024-06-10 11:52:30.359479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.359485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.359813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.359820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.360145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.360151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.360469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.360476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.360660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.360667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.360887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.360894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.361220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.361227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.361534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.361541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.361757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.361764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.362066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.362073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 
00:44:01.442 [2024-06-10 11:52:30.362435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.362441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.362769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.362775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.363103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.363111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.363462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.363468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.363801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.363808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.364159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.364165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.364492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.364499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.364845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.364852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.365182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.365188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.365559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.365565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 
00:44:01.442 [2024-06-10 11:52:30.365973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.365981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.366167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.366174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.366546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.366554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.366879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.366886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.367150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.367157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.367511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.367518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.367844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.367853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.368198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.368204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.368579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.368586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.368883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.368890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 
00:44:01.442 [2024-06-10 11:52:30.369233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.369239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.369644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.369650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.369904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.369911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.442 [2024-06-10 11:52:30.370159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.442 [2024-06-10 11:52:30.370166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.442 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.370531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.370538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.370793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.370800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.371136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.371142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.371323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.371330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.371652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.371659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.371995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.372001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 
00:44:01.443 [2024-06-10 11:52:30.372329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.372336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.372664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.372674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.373036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.373043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.373370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.373377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.373620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.373626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.443 [2024-06-10 11:52:30.373974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.443 [2024-06-10 11:52:30.373981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.443 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.374196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.374204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.374549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.374557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.374750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.374759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.375103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.375111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 
00:44:01.717 [2024-06-10 11:52:30.375462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.375468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.375720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.375728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.376073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.376079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.376405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.376412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.376743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.376750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.377104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.377111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.377437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.377443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.377773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.377780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.378108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.378114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.378348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.378355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 
00:44:01.717 [2024-06-10 11:52:30.378704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.378711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.378961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.378967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.379179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.379192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.379546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.379553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.379900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.379907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.380240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.380246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.380572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.380582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.380906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.380913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.381234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.381240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.381567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.381574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 
00:44:01.717 [2024-06-10 11:52:30.381916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.381923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.382109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.382116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.382480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.382487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.717 qpair failed and we were unable to recover it. 00:44:01.717 [2024-06-10 11:52:30.382835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.717 [2024-06-10 11:52:30.382842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.383169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.383177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.383546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.383553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.383898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.383905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.384233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.384239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.384566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.384573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.384902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.384909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 
00:44:01.718 [2024-06-10 11:52:30.385244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.385250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.385588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.385595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.385950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.385957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.386295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.386301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.386634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.386641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.386974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.386982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.387348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.387356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.387489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.387495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.387827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.387834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.388161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.388168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 
00:44:01.718 [2024-06-10 11:52:30.388503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.388510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.388884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.388891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.389246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.389253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.389580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.389586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.389918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.389925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.390254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.390261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.390568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.390576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.390801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.390808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.391172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.391179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.391550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.391557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 
00:44:01.718 [2024-06-10 11:52:30.391925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.391932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.392260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.392267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.392443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.392450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.392759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.392766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.393173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.393180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.393417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.393424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.393738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.393746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.394121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.394128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.394457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.394464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.394812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.394819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 
00:44:01.718 [2024-06-10 11:52:30.395147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.395154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.718 qpair failed and we were unable to recover it. 00:44:01.718 [2024-06-10 11:52:30.395477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.718 [2024-06-10 11:52:30.395483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.395822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.395829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.396150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.396157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.396485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.396491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.396864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.396871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.397203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.397210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.397536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.397542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.397883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.397890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.398219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.398225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 
00:44:01.719 [2024-06-10 11:52:30.398555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.398562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.398935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.398943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.399309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.399316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.399683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.399690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.400010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.400016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.400264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.400270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.400516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.400523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.400739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.400746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.401120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.401127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.401454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.401461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 
00:44:01.719 [2024-06-10 11:52:30.401811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.401818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.402147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.402153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.402399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.402405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.402735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.402743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.403075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.403081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.403410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.403416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.403745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.403751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.404103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.404110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.404479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.404486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.404858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.404864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 
00:44:01.719 [2024-06-10 11:52:30.405197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.405203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.405534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.405540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.405717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.405725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.406022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.406029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.406234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.406240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.406417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.406424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.406853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.406861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.407198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.407205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.407533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.719 [2024-06-10 11:52:30.407539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.719 qpair failed and we were unable to recover it. 00:44:01.719 [2024-06-10 11:52:30.407790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.407797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 
00:44:01.720 [2024-06-10 11:52:30.408139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.408145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.408434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.408441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.408792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.408799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.409120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.409127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.409454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.409460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.409786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.409794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.410143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.410150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.410495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.410501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.410836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.410843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.411179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.411185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 
00:44:01.720 [2024-06-10 11:52:30.411512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.411518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.411765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.411772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.412153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.412159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.412412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.412419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.412770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.412777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.413117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.413123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.413415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.413421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.413773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.413780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.414041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.414048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.414405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.414411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 
00:44:01.720 [2024-06-10 11:52:30.414745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.414751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.415037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.415044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.415390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.415396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.415729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.415736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.416080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.416086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.416415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.416421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.416749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.416756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.417087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.417093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.417470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.417477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.417844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.417850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 
00:44:01.720 [2024-06-10 11:52:30.418178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.418186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.418531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.418538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.418885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.418893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.419230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.419237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.419564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.419570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.419933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.419939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.420265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.420273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.720 qpair failed and we were unable to recover it. 00:44:01.720 [2024-06-10 11:52:30.420601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.720 [2024-06-10 11:52:30.420607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.420986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.420992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.421315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.421322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 
00:44:01.721 [2024-06-10 11:52:30.421649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.421657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.422011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.422018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.422440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.422448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.422795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.422802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.423139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.423147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.423476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.423482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.423665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.423679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.424025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.424032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.424250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.424256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.424592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.424600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 
00:44:01.721 [2024-06-10 11:52:30.425020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.425027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.425379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.425386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.425820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.425828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.426151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.426158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.426569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.426576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.426920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.426927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.427157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.427164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.427499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.427506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.427833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.427840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.428182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.428189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 
00:44:01.721 [2024-06-10 11:52:30.428558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.428565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.428983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.428991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.429238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.429244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.429587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.429593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.429960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.429967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.430339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.430347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.430590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.430598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.430953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.721 [2024-06-10 11:52:30.430960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.721 qpair failed and we were unable to recover it. 00:44:01.721 [2024-06-10 11:52:30.431147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.431154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.431498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.431505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 
00:44:01.722 [2024-06-10 11:52:30.431834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.431841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.432181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.432187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.432514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.432521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.432853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.432860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.433201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.433208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.433540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.433546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.433733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.433744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.434069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.434076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.434406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.434414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.434604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.434613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 
00:44:01.722 [2024-06-10 11:52:30.434956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.434964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.435337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.435344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.435675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.435681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.436036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.436043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.436371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.436379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.436685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.436693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.437033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.437040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.437328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.437334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.437707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.437713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.438041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.438047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 
00:44:01.722 [2024-06-10 11:52:30.438381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.438388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.438635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.438643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.438908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.438915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.439292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.439299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.439626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.439633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.439970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.439979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.440194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.440201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.440561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.440569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.440915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.440922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.441249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.441257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 
00:44:01.722 [2024-06-10 11:52:30.441626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.441635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.442016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.442024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.442361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.442368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.442645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.442652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.443009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.443015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.443347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.443353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.722 qpair failed and we were unable to recover it. 00:44:01.722 [2024-06-10 11:52:30.443685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.722 [2024-06-10 11:52:30.443691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.443926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.443933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.444276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.444282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.444609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.444616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 
00:44:01.723 [2024-06-10 11:52:30.444951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.444959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.445155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.445162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.445524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.445531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.445865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.445872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.446227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.446234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.446582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.446589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.446924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.446931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.447258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.447264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.447592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.447598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.447942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.447949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 
00:44:01.723 [2024-06-10 11:52:30.448278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.448285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.448612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.448618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.448948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.448955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.449302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.449309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.449544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.449552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.449891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.449898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.450231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.450237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.450562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.450568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.450914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.450921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.451248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.451255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 
00:44:01.723 [2024-06-10 11:52:30.451577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.451583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.451927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.451934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.452302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.452309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.452678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.452685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.453031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.453037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.453374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.453380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.453707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.453714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.454053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.454060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.454408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.454414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.454740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.454747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 
00:44:01.723 [2024-06-10 11:52:30.455080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.455087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.455415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.455422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.455750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.455758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.456014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.456022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.723 qpair failed and we were unable to recover it. 00:44:01.723 [2024-06-10 11:52:30.456359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.723 [2024-06-10 11:52:30.456366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.456699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.456706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.457061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.457067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.457438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.457444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.457833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.457840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.458167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.458173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 
00:44:01.724 [2024-06-10 11:52:30.458374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.458380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.458745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.458753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.459122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.459129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.459458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.459464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.459790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.459797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.460149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.460155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.460530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.460536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.460866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.460873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.461204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.461211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.461549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.461556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 
00:44:01.724 [2024-06-10 11:52:30.461900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.461907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.462264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.462271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.462608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.462614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.462947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.462954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.463281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.463287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.463616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.463622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.463948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.463955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.464288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.464294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.464522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.464528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.464866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.464874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 
00:44:01.724 [2024-06-10 11:52:30.465221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.465228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.465581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.465588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.465955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.465962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.466268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.466274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.466622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.466628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.466709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.466715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.467034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.467041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.467368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.467375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.467703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.467709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.468065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.468072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 
00:44:01.724 [2024-06-10 11:52:30.468399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.468406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.468675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.468683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.724 [2024-06-10 11:52:30.468982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.724 [2024-06-10 11:52:30.468989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.724 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.469290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.469298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.469648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.469654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.469986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.469992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.470323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.470329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.470657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.470664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.471002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.471009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.471355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.471361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 
00:44:01.725 [2024-06-10 11:52:30.471692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.471700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.472014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.472021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.472312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.472318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.472663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.472672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.473000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.473006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.473375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.473381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.473705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.473712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.474039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.474045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.474372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.474379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.474635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.474641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 
00:44:01.725 [2024-06-10 11:52:30.474846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.474854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.475033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.475040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.475409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.475416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.475764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.475770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.476117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.476123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.476455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.476462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.476796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.476803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.477062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.477069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.477388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.477395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.477768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.477776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 
00:44:01.725 [2024-06-10 11:52:30.478107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.478115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.478466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.478473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.478819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.478826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.479156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.479162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.479483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.479489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.479667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.479679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.479999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.480006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.480334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.480340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.480711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.480718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.481075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.481081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 
00:44:01.725 [2024-06-10 11:52:30.481414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.481421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.725 qpair failed and we were unable to recover it. 00:44:01.725 [2024-06-10 11:52:30.481754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.725 [2024-06-10 11:52:30.481761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.482006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.482012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.482324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.482332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.482692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.482699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.483038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.483044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.483220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.483227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.483533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.483540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.483869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.483875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.484249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.484256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 
00:44:01.726 [2024-06-10 11:52:30.484582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.484588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.484927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.484934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.485262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.485268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.485596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.485602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.485937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.485944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.486274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.486281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.486629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.486636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.486991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.486998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.487363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.487370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.487657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.487663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 
00:44:01.726 [2024-06-10 11:52:30.487968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.487975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.488158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.488165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.488565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.488572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.488928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.488935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.489186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.489193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.489540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.489546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.489873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.489880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.490129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.490135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.490458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.490464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.490866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.490872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 
00:44:01.726 [2024-06-10 11:52:30.491070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.491076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.491421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.491428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.491632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.726 [2024-06-10 11:52:30.491639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.726 qpair failed and we were unable to recover it. 00:44:01.726 [2024-06-10 11:52:30.491999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.492006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.492369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.492376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.492723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.492730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.493121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.493128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.493494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.493502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.493718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.493725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.494079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.494086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 
00:44:01.727 [2024-06-10 11:52:30.494435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.494442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.494793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.494800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.495148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.495154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.495488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.495496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.495843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.495851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.496179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.496186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.496520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.496527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.496889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.496896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.497229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.497236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.497602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.497609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 
00:44:01.727 [2024-06-10 11:52:30.497954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.497961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.498247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.498254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.498604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.498611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.498941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.498947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.499274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.499280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.499532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.499539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.499883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.499890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.500217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.500223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.500551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.500557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.500924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.500931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 
00:44:01.727 [2024-06-10 11:52:30.501222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.501229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.501582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.501589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.501874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.501880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.502313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.502319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.502650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.502657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.502987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.502994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.503299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.503306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.503661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.503671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.504014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.504021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.727 [2024-06-10 11:52:30.504342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.504349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 
00:44:01.727 [2024-06-10 11:52:30.504680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.727 [2024-06-10 11:52:30.504687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.727 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.505035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.505042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.505285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.505291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.505587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.505593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.505942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.505948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.506275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.506282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.506629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.506636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.506960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.506966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.507312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.507318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.507645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.507652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 
00:44:01.728 [2024-06-10 11:52:30.507971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.507977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.508310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.508316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.508648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.508654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.508979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.508987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.509315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.509322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.509696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.509704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.509750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.509757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.509950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.509957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.510297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.510304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.510636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.510642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 
00:44:01.728 [2024-06-10 11:52:30.510991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.510997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.511328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.511334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.511665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.511674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.511926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.511932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.512260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.512266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.512633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.512641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.512981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.512988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.513336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.513343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.513529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.513536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.513684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.513692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 
00:44:01.728 [2024-06-10 11:52:30.513862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.513870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.514150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.514158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.514497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.514503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.514850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.514857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.515220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.515226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.515592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.515599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.515953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.515960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.516331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.516337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.516662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.728 [2024-06-10 11:52:30.516672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.728 qpair failed and we were unable to recover it. 00:44:01.728 [2024-06-10 11:52:30.517005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.517011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 
00:44:01.729 [2024-06-10 11:52:30.517344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.517351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.517679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.517687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.517993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.517999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.518312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.518318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.518689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.518696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.519042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.519049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.519416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.519422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.519675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.519682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.520005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.520012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.520341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.520347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 
00:44:01.729 [2024-06-10 11:52:30.520679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.520686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.521033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.521040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.521372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.521379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.521711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.521719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.522074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.522080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.522410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.522417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.522649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.522656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.523006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.523013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.523344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.523351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.523696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.523704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 
00:44:01.729 [2024-06-10 11:52:30.523952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.523959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.524280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.524286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.524612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.524618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.524825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.524832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.525176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.525183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.525479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.525486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.525656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.525663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.526071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.526079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.526453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.526460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.526792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.526799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 
00:44:01.729 [2024-06-10 11:52:30.527126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.527132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.527456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.527462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.527864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.527870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.528200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.528207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.528574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.528580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.528833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.528841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.529185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.529191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.729 qpair failed and we were unable to recover it. 00:44:01.729 [2024-06-10 11:52:30.529517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.729 [2024-06-10 11:52:30.529524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.529852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.529859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.530066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.530073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 
00:44:01.730 [2024-06-10 11:52:30.530316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.530324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.530560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.530566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.530908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.530915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.531243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.531249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.531581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.531588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.531927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.531934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.532280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.532287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.532695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.532703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.532876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.532882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.533219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.533225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 
00:44:01.730 [2024-06-10 11:52:30.533552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.533558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.533925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.533931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.534266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.534272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.534601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.534609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.534942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.534950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.535370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.535377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.535733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.535741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.536086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.536093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.536418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.536424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.536751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.536758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 
00:44:01.730 [2024-06-10 11:52:30.537007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.537013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.537388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.537394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.537626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.537632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.538096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.538103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.538301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.538308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.538529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.538536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.538880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.538888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.539236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.539244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.539615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.539621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.539958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.539966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 
00:44:01.730 [2024-06-10 11:52:30.540291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.540298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.540623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.730 [2024-06-10 11:52:30.540629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.730 qpair failed and we were unable to recover it. 00:44:01.730 [2024-06-10 11:52:30.540967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.540974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.541316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.541323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.541647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.541654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.541989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.541996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.542371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.542378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.542699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.542707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.543057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.543063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.543471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.543477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 
00:44:01.731 [2024-06-10 11:52:30.543721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.543728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.544096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.544102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.544429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.544435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.544759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.544766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.545141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.545147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.545481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.545487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.545818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.545825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.546153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.546160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.546495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.546502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.546900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.546908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 
00:44:01.731 [2024-06-10 11:52:30.547254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.547261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.547586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.547593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.547930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.547937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.548181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.548189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.548538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.548545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.548874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.548880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.549209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.549215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.549547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.549553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.549884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.549891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.550241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.550248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 
00:44:01.731 [2024-06-10 11:52:30.550614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.550621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.550841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.550847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.551174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.551182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.551431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.551437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.551783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.551790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.552119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.552125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.552445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.552451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.552778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.731 [2024-06-10 11:52:30.552785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.731 qpair failed and we were unable to recover it. 00:44:01.731 [2024-06-10 11:52:30.553118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.553124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.553452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.553458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 
00:44:01.732 [2024-06-10 11:52:30.553804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.553811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.554138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.554144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.554475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.554482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.554858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.554865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.555217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.555224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.555570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.555576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.555929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.555936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.556334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.556340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.556672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.556678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.557043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.557049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 
00:44:01.732 [2024-06-10 11:52:30.557457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.557464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.557869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.557897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.558239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.558247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.558599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.558606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.558807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.558815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.559150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.559157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.559488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.559495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.559903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.559910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.560093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.560103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.560417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.560424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 
00:44:01.732 [2024-06-10 11:52:30.560757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.560764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.561093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.561100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.561428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.561434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.561764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.561773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.561998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.562005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.562344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.562350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.562685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.562692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.563051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.563058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.563395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.563402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.563749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.563755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 
00:44:01.732 [2024-06-10 11:52:30.564170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.564176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.564508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.564514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.564841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.564848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.565183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.565190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.565562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.565569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.565912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.565919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.566262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.732 [2024-06-10 11:52:30.566268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.732 qpair failed and we were unable to recover it. 00:44:01.732 [2024-06-10 11:52:30.566594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.566602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.566783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.566791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.567030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.567037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 
00:44:01.733 [2024-06-10 11:52:30.567363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.567370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.567612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.567624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.567965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.567972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.568306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.568312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.568640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.568646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.568972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.568979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.569342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.569349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.569716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.569723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.570120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.570126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.570448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.570454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 
00:44:01.733 [2024-06-10 11:52:30.570780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.570787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.571113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.571120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.571448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.571454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.571781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.571789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.572199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.572206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.572535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.572542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.572896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.572903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.573234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.573240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.573568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.573574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.573817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.573824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 
00:44:01.733 [2024-06-10 11:52:30.574226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.574234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.574554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.574561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2519851 Killed "${NVMF_APP[@]}" "$@" 00:44:01.733 [2024-06-10 11:52:30.574911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.574918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.575246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.575253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:44:01.733 [2024-06-10 11:52:30.575582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.575589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:44:01.733 [2024-06-10 11:52:30.575933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.575941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:01.733 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:01.733 [2024-06-10 11:52:30.576363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.576370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 
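The bash messages interleaved with the connect errors explain the storm: target_disconnect.sh has just killed the running target process (the "Killed ${NVMF_APP[@]}" line), and test case nvmf_target_disconnect_tc2 then calls disconnect_init 10.0.0.2, which relaunches it via nvmfappstart -m 0xF0. A rough sketch of that step, reconstructed only from the trace above (the real code lives in spdk/test/nvmf/host/target_disconnect.sh; anything beyond what the trace shows is an assumption):
# Approximate shape of the step traced above, not the verbatim script:
disconnect_init() {
    local ip=$1                 # 10.0.0.2 in this run
    nvmfappstart -m 0xF0        # restart nvmf_tgt on core mask 0xF0
    # ...presumably re-create the TCP transport, subsystem and listener on
    # $ip:4420 so the host's connect retries above can succeed again...
}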
00:44:01.733 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:01.733 [2024-06-10 11:52:30.576617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.576623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.576960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.576967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.577115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.577121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.577555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.577562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.577902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.577909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.578236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.733 [2024-06-10 11:52:30.578243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.733 qpair failed and we were unable to recover it. 00:44:01.733 [2024-06-10 11:52:30.578572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.578579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.578820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.578829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.579145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.579152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 
00:44:01.734 [2024-06-10 11:52:30.579479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.579486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.579709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.579717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.580115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.580123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.580489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.580497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.580848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.580857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.581234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.581242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.581613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.581621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.581975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.581983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.582330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.582338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.582540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.582547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 
00:44:01.734 [2024-06-10 11:52:30.582898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.582906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.583273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.583280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.583627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.583634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2520874 00:44:01.734 [2024-06-10 11:52:30.583945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.583954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2520874 00:44:01.734 [2024-06-10 11:52:30.584326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.584335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2520874 ']' 00:44:01.734 [2024-06-10 11:52:30.584709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.584719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:44:01.734 [2024-06-10 11:52:30.584990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.584999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
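The relaunch itself is visible in the trace: nvmf/common.sh starts the target inside the cvl_0_0_ns_spdk network namespace (where the e810 test ports live) and records its PID as nvmfpid=2520874. The same command, annotated (the flag readings are my interpretation of common SPDK application options, not something this log states):
# The relaunch command from the trace, with assumed flag meanings as comments:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF0
# -i 0      shared-memory instance id
# -e 0xFFFF tracepoint group mask (verbose tracing during the test)
# -m 0xF0   run reactors on CPU cores 4-7 only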
00:44:01.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:01.734 [2024-06-10 11:52:30.585350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.585360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:44:01.734 11:52:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:01.734 [2024-06-10 11:52:30.585730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.585740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.586083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.586091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.586237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.586245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.586596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.586604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.586962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.586970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.587339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.587346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.587697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.587707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.587991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.587999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 
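After launching, waitforlisten 2520874 blocks until the new target is actually up, which is why the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message appears while the host's connect retries keep failing. A hedged sketch of that wait loop (the polling mechanics below are assumptions; only the PID, the socket path and max_retries=100 come from the trace):
# Sketch only, not the nvmf/common.sh implementation:
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    kill -0 2520874 2>/dev/null || break    # the relaunched nvmf_tgt died, give up
    [[ -S $rpc_addr ]] && break             # RPC socket exists, target is listening
    sleep 0.5
done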
00:44:01.734 [2024-06-10 11:52:30.588332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.588343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.588624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.588632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.588992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.589000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.589353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.589360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.589686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.589693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.734 qpair failed and we were unable to recover it. 00:44:01.734 [2024-06-10 11:52:30.590040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.734 [2024-06-10 11:52:30.590049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.590283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.590290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.590519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.590527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.590869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.590878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.591244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.591252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 
00:44:01.735 [2024-06-10 11:52:30.591602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.591609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.591968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.591976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.592197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.592204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.592525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.592534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.592898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.592906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.593257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.593264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.593579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.593587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.593889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.593897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.594147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.594155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.594498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.594506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 
00:44:01.735 [2024-06-10 11:52:30.594914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.594922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.595284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.595293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.595641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.595650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.596069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.596077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.596432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.596440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.596778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.596786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.597034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.597042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.597393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.597401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.597768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.597776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.598094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.598101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 
00:44:01.735 [2024-06-10 11:52:30.598408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.598415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.598778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.598786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.599118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.599125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.599459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.599467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.599813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.599821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.600176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.600184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.735 [2024-06-10 11:52:30.600533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.735 [2024-06-10 11:52:30.600541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.735 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.600932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.600939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.601311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.601317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.601667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.601678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 
00:44:01.736 [2024-06-10 11:52:30.602017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.602023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.602350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.602357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.602703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.602710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.603078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.603085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.603465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.603472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.603829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.603837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.604223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.604230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.604559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.604566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.604904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.604911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.605272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.605279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 
00:44:01.736 [2024-06-10 11:52:30.605610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.605617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.605958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.605965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.606293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.606299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.606627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.606634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.607043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.607051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.607388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.607395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.607740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.607747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.608085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.608091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.608423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.608430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.608821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.608828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 
00:44:01.736 [2024-06-10 11:52:30.609180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.609186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.609513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.609521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.609937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.609944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.610274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.610281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.610612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.610620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.610951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.610959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.611309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.611316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.611520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.611526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.611872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.611879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.612216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.612223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 
00:44:01.736 [2024-06-10 11:52:30.612585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.612591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.612935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.612942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.613131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.613138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.613515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.613523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.613778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.736 [2024-06-10 11:52:30.613785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.736 qpair failed and we were unable to recover it. 00:44:01.736 [2024-06-10 11:52:30.614136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.614144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.614501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.614508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.614815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.614822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.615150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.615157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.615484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.615490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 
00:44:01.737 [2024-06-10 11:52:30.615846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.615854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.616230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.616237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.616563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.616569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.616910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.616917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.617244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.617252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.617575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.617583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.617838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.617846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.618097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.618104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.618423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.618430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.618759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.618766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 
00:44:01.737 [2024-06-10 11:52:30.619141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.619148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.619474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.619481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.619823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.619831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.620164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.620171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.620502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.620509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.620847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.620855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.621226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.621233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.621433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.621440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.621786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.621793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.622080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.622087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 
00:44:01.737 [2024-06-10 11:52:30.622415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.622422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.622751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.622760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.622990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.622998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.623347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.623354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.623678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.623685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.623931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.623937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.624291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.624297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.624545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.624552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.624873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.624880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.625222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.625229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 
00:44:01.737 [2024-06-10 11:52:30.625564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.625571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.625966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.625973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.626299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.626306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.737 [2024-06-10 11:52:30.626639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.737 [2024-06-10 11:52:30.626646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.737 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.626989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.626996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.627323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.627331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.627691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.627698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.628052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.628058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.628262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.628270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.628612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.628618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 
00:44:01.738 [2024-06-10 11:52:30.628947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.628954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.629322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.629329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.629630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.629636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.629986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.629993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.630321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.630327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.630657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.630664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.631044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.631052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.631398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.631405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.631613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.631621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.632008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.632015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 
00:44:01.738 [2024-06-10 11:52:30.632361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.632368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.632695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.632702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.633027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.633034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.633236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.633244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.633583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.633591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.633958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.633965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.634264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.634272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.634262] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:44:01.738 [2024-06-10 11:52:30.634314] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:01.738 [2024-06-10 11:52:30.634630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.634639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.635058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.635065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 
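[Editor's note: buried in the stream above is the target-side startup banner: SPDK v24.09-pre (git sha1 ee2eae53a) with DPDK 24.03.0, started with EAL parameters including -c 0xF0 (run on cores 4-7), --file-prefix=spdk0 (keep this process's hugepage files separate from other SPDK processes) and --proc-type=auto (let EAL decide between primary and secondary mode). Below is a hedged C sketch of handing an equivalent argument vector straight to DPDK's rte_eal_init(); it is a standalone illustration, while the nvmf application in the log assembles its EAL arguments internally:]

    /* illustration: initialize DPDK's EAL with an argv like the one in the log */
    #include <rte_eal.h>
    #include <stdio.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                              /* program name shown in the log */
            "-c", "0xF0",                        /* core mask: cores 4-7 */
            "--no-telemetry",
            "--base-virtaddr=0x200000000000",    /* base address for DPDK memory mappings */
            "--file-prefix=spdk0",               /* isolate this process's hugepage files */
            "--proc-type=auto",
        };
        int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            printf("EAL initialization failed\n");
            return 1;
        }
        printf("EAL initialized\n");
        return rte_eal_cleanup();
    }

[The log-level and --match-allocations flags from the logged parameter list are left out of the sketch.]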
00:44:01.738 [2024-06-10 11:52:30.635451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.635459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.635837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.635844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.636176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.636183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.636535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.636543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.636879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.636887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.637261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.637269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.637518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.637526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.637873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.637881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.638227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.638234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.638608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.638615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 
00:44:01.738 [2024-06-10 11:52:30.638962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.638970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.639192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.738 [2024-06-10 11:52:30.639199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.738 qpair failed and we were unable to recover it. 00:44:01.738 [2024-06-10 11:52:30.639385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.639394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.639662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.639673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.640016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.640025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.640375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.640383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.640729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.640737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.641110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.641118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.641485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.641492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.641686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.641694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 
00:44:01.739 [2024-06-10 11:52:30.641988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.641996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.642351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.642359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.642706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.642714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.643063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.643071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.643416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.643424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.643777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.643785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.644163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.644170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.644518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.644525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.644880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.644887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.645258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.645265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 
00:44:01.739 [2024-06-10 11:52:30.645633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.645640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.645978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.645986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.646332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.646339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.646705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.646712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.647077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.647084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.647429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.647437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.647784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.647792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.648166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.648173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.648512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.648520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.648879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.648887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 
00:44:01.739 [2024-06-10 11:52:30.649130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.649137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.649483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.649491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.649858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.649866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.650202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.650209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.650555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.650563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.650937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.650945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.651313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.651321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.651662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.651673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.652034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.652042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 00:44:01.739 [2024-06-10 11:52:30.652411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.739 [2024-06-10 11:52:30.652418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.739 qpair failed and we were unable to recover it. 
00:44:01.739 [2024-06-10 11:52:30.652786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.652793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.653096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.653102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.653442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.653448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.653766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.653773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.654108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.654114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.654373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.654380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.654712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.654720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.655049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.655056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.655308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.655315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.655710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.655717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 
00:44:01.740 [2024-06-10 11:52:30.656043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.656050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.656381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.656388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.656715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.656722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.657064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.657071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.657397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.657404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.657731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.657738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.658086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.658093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.658387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.658394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.658741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.658748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.659081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.659088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 
00:44:01.740 [2024-06-10 11:52:30.659375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.659382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.659733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.659740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.660074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.660081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.660411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.660417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.660651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.660658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.661080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.661088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.661435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.661442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.661811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.661818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.662173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.662179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.662506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.662513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 
00:44:01.740 [2024-06-10 11:52:30.662852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.662859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.663189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.663197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.740 qpair failed and we were unable to recover it. 00:44:01.740 [2024-06-10 11:52:30.663523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.740 [2024-06-10 11:52:30.663530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.663878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.663885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.664067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.664074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.664215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.664221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.664478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.664485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.664839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.664846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.665108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.665115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.665442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.665449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 
00:44:01.741 [2024-06-10 11:52:30.665778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.665786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.666074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.666080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 EAL: No free 2048 kB hugepages reported on node 1 00:44:01.741 [2024-06-10 11:52:30.666435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.666442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.666768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.666776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.667155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.667164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.667375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.667383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.667728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.667735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.668066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.668073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.668401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.668408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.668735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.668742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 
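[Editor's note: the "EAL: No free 2048 kB hugepages reported on node 1" line interleaved above is a DPDK warning that NUMA node 1 has no free 2 MB hugepages at the moment the nvmf target initializes; SPDK allocates its DMA-able memory from hugepages, so the memory has to come from other nodes or page sizes. A small C sketch for checking the per-node reservation through Linux sysfs follows; the paths assume the standard sysfs layout and that the host actually exposes a node1:]

    /* illustration: report reserved and free 2 MB hugepages on NUMA node 1 */
    #include <stdio.h>

    static long read_long(const char *path)
    {
        long v = -1;
        FILE *f = fopen(path, "r");
        if (f != NULL) {
            if (fscanf(f, "%ld", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
        char path[256];

        snprintf(path, sizeof(path), "%s/nr_hugepages", base);
        printf("node1 2MB hugepages reserved: %ld\n", read_long(path));

        snprintf(path, sizeof(path), "%s/free_hugepages", base);
        printf("node1 2MB hugepages free:     %ld\n", read_long(path));
        return 0;
    }

[A value of 0 in either file is consistent with the warning; reservations are normally made by the environment's setup scripts before the target starts.]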
00:44:01.741 [2024-06-10 11:52:30.669087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.669095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.669422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.669429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.669715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.669722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.670069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.670076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.670431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.670437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.670766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.670772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.671153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.671160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.671493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.671500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.671851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.671858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.672084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.672092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 
00:44:01.741 [2024-06-10 11:52:30.672465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.672472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.672807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.672813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.673151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.673158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.673486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.673493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.673824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.673831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:01.741 [2024-06-10 11:52:30.674209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:01.741 [2024-06-10 11:52:30.674216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:01.741 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.674593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.674602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.674937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.674945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.675318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.675325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.675706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.675714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 
00:44:02.016 [2024-06-10 11:52:30.676022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.676029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.676364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.676371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.676745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.676752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.677088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.677094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.677421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.677427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.677744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.677752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.678090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.678097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.678433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.678440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.678783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.678790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 00:44:02.016 [2024-06-10 11:52:30.679166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.016 [2024-06-10 11:52:30.679173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.016 qpair failed and we were unable to recover it. 
00:44:02.018 [2024-06-10 11:52:30.718645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.718653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.718983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.718991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.719410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.719418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.719772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.719780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.720105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:44:02.018 [2024-06-10 11:52:30.720136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.720145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.720467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.720473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.720855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.720863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.721224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.721231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.018 [2024-06-10 11:52:30.721610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.018 [2024-06-10 11:52:30.721617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.018 qpair failed and we were unable to recover it.
00:44:02.019 [2024-06-10 11:52:30.741927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.741935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.742280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.742288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.742612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.742619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.742958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.742965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.743145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.743154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.743514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.743521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.743848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.743855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.744190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.744197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.744450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.744456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.744705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.744712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 
00:44:02.019 [2024-06-10 11:52:30.745049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.745056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.745392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.745399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.745737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.745744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.745985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.745992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.746446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.746453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.746678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.746688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.747035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.747042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.747410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.747418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.747754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.747762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.748091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.748099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 
00:44:02.019 [2024-06-10 11:52:30.748468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.748476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.748826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.748833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.749227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.749235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.749484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.749491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.749753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.749761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.750113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.750121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.750454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.750461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.750644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.750653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.750980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.750988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.751385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.751393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 
00:44:02.019 [2024-06-10 11:52:30.751807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.751816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.752163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.752170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.752499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.752507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.752888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.752896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.753223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.753231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.753561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.753569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.753935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.753944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.754295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.754304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.754565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.754573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.754950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.754958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 
00:44:02.019 [2024-06-10 11:52:30.755300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.755307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.755684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.755692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.756045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.756052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.756386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.756393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.756647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.756654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.757012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.757019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.757353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.757361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.757699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.757707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.757955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.757962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.758178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.758185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 
00:44:02.019 [2024-06-10 11:52:30.758433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.758440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.758697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.758704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.758904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.758912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.759352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.759359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.019 qpair failed and we were unable to recover it. 00:44:02.019 [2024-06-10 11:52:30.759689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.019 [2024-06-10 11:52:30.759696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.760016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.760025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.760394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.760401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.760754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.760762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.761148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.761155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.761535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.761542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 
00:44:02.020 [2024-06-10 11:52:30.761947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.761954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.762142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.762150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.762498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.762505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.762896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.762903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.763246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.763253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.763590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.763597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.763785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.763792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.764033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.764039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.764425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.764432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.764811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.764818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 
00:44:02.020 [2024-06-10 11:52:30.765174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.765181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.765514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.765521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.765688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.765696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.766036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.766043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.766384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.766391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.766726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.766733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.767097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.767105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.767455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.767463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.767810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.767818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.768131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.768139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 
00:44:02.020 [2024-06-10 11:52:30.768381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.768388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.768731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.768738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.769080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.769088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.769415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.769421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.769757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.769765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.770153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.770161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.770497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.770504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.770851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.770858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.771270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.771277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.771621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.771627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 
00:44:02.020 [2024-06-10 11:52:30.771967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.771974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.772305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.772312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.772527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.772534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.772882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.772889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.773219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.773225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.773553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.773562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.773904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.773912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.774261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.774267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.774513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.774520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.774750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.774757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 
00:44:02.020 [2024-06-10 11:52:30.774976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.774983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.775354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.775360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.775692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.775699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.776104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.776110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.776445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.776452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.776740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.776747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.777113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.777120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.777451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.777458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.777798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.777804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.778197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.778204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 
00:44:02.020 [2024-06-10 11:52:30.778537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.778543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.778920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.778926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.779258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.779265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.779507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.779514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.779850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.779857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.780198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.780206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.780546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.780554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.780804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.780811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.781162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.781170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.781492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.781499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 
00:44:02.020 [2024-06-10 11:52:30.781744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.781750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.782081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.782087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.782432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.782439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.782778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.782785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.783164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.783172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.020 [2024-06-10 11:52:30.783506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.020 [2024-06-10 11:52:30.783514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.020 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.783858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.783866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.784170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.784178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.784545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.784552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.784828] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:02.021 [2024-06-10 11:52:30.784859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:44:02.021 [2024-06-10 11:52:30.784867] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:44:02.021 [2024-06-10 11:52:30.784873] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:44:02.021 [2024-06-10 11:52:30.784879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:44:02.021 [2024-06-10 11:52:30.785022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.021 [2024-06-10 11:52:30.785029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.021 qpair failed and we were unable to recover it.
00:44:02.021 [2024-06-10 11:52:30.785258] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:44:02.021 [2024-06-10 11:52:30.785411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.021 [2024-06-10 11:52:30.785418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.021 qpair failed and we were unable to recover it.
00:44:02.021 [2024-06-10 11:52:30.785408] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:44:02.021 [2024-06-10 11:52:30.785542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:44:02.021 [2024-06-10 11:52:30.785543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:44:02.021 [2024-06-10 11:52:30.785945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.021 [2024-06-10 11:52:30.785974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.021 qpair failed and we were unable to recover it.
00:44:02.021 [2024-06-10 11:52:30.786235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.021 [2024-06-10 11:52:30.786243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.021 qpair failed and we were unable to recover it.
00:44:02.021 [2024-06-10 11:52:30.786653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.021 [2024-06-10 11:52:30.786660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.021 qpair failed and we were unable to recover it.
00:44:02.021 [2024-06-10 11:52:30.787030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.021 [2024-06-10 11:52:30.787037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.021 qpair failed and we were unable to recover it.
00:44:02.021 [2024-06-10 11:52:30.787282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.021 [2024-06-10 11:52:30.787289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.021 qpair failed and we were unable to recover it.
00:44:02.021 [2024-06-10 11:52:30.787649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.787656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.787859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.787868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.788080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.788088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.788368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.788375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.788729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.788736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.789113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.789120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.789505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.789511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.789842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.789849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.790189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.790196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 00:44:02.021 [2024-06-10 11:52:30.790531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.021 [2024-06-10 11:52:30.790540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.021 qpair failed and we were unable to recover it. 
00:44:02.024 [2024-06-10 11:52:30.853507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.853514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.853891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.853897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.854243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.854250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.854505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.854512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.854874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.854881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.855085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.855092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.855526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.855533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.855890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.855896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.856136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.856142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.856320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.856327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 
00:44:02.024 [2024-06-10 11:52:30.856687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.856694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.857049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.857056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.857318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.857324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.857548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.857556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.857950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.857957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.858224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.858230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.858524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.858531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.858934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.858941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.859304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.859310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.859648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.859655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 
00:44:02.024 [2024-06-10 11:52:30.859851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.859858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.860217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.860224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.860429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.860436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.860680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.860688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.861048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.861054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.861307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.861314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.861622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.861630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.861970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.861977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.862321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.862327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.862605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.862612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 
00:44:02.024 [2024-06-10 11:52:30.863002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.863009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.863337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.863343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.863678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.863685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.863866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.863873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.864289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.864295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.864487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.864496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.024 [2024-06-10 11:52:30.864825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.024 [2024-06-10 11:52:30.864832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.024 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.865222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.865228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.865435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.865442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.865661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.865672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 
00:44:02.025 [2024-06-10 11:52:30.865938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.865945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.866326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.866333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.866512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.866519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.866854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.866861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.867086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.867092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.867439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.867445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.867635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.867643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.867870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.867877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.868223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.868230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.868567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.868574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 
00:44:02.025 [2024-06-10 11:52:30.868975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.868982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.869167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.869176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.869556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.869563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.869805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.869812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.870106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.870113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.870432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.870438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.870906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.870914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.871091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.871098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.871444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.871450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.871781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.871788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 
00:44:02.025 [2024-06-10 11:52:30.872177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.872184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.872517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.872524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.872776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.872783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.873143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.873149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.873485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.873492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.873841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.873848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.874178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.874186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.874445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.874453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.874652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.874659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.874827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.874835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 
00:44:02.025 [2024-06-10 11:52:30.875039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.875047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.875399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.875405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.875756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.875762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.876138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.876145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.876478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.876485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.876740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.876748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.876928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.876935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.877286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.877295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.877630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.877637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.877831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.877838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 
00:44:02.025 [2024-06-10 11:52:30.878209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.878216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.878561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.878567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.878914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.878920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.879250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.879257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.879592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.879598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.879939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.879946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.880278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.880284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.880490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.880496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.880864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.880873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.881251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.881259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 
00:44:02.025 [2024-06-10 11:52:30.881596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.881604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.882032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.882040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.882325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.882331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.882712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.882719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.882945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.882951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.883316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.883322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.883696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.883703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.884097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.884104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.884434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.884441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.884788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.884795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 
00:44:02.025 [2024-06-10 11:52:30.885270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.885276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.885610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.885618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.885888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.885895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.886234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.886241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.886579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.886586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.886771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.886778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.887149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.025 [2024-06-10 11:52:30.887158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.025 qpair failed and we were unable to recover it. 00:44:02.025 [2024-06-10 11:52:30.887560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.887567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.887748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.887755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.888094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.888101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 
00:44:02.026 [2024-06-10 11:52:30.888435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.888442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.888661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.888677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.889036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.889042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.889249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.889256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.889625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.889632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.889976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.889983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.890276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.890283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.890470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.890477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.890710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.890719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.891005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.891012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 
00:44:02.026 [2024-06-10 11:52:30.891327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.891333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.891515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.891523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.891840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.891847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.892207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.892213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.892549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.892557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.892905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.892913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.893317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.893323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.893508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.893515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.893883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.893891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 00:44:02.026 [2024-06-10 11:52:30.894227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.026 [2024-06-10 11:52:30.894234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.026 qpair failed and we were unable to recover it. 
00:44:02.026 [2024-06-10 11:52:30.894570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.026 [2024-06-10 11:52:30.894576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.026 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every further connect attempt from 11:52:30.894 through 11:52:30.959: connect() fails with errno = 111 each time, and the qpair on tqpair=0x7f6224000b90 (addr=10.0.0.2, port=4420) cannot be recovered ...]
00:44:02.029 [2024-06-10 11:52:30.959317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.029 [2024-06-10 11:52:30.959323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.029 qpair failed and we were unable to recover it.
00:44:02.029 [2024-06-10 11:52:30.959405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.959410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.959657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.959664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.960019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.960027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.960227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.960234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.960317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.960325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.960652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.960659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.961041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.961049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.961215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.961222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.961609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.961616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.961853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.961860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 
00:44:02.029 [2024-06-10 11:52:30.962198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.962205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.962414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.962421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.962662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.962672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.963079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.963086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.963265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.963272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.963423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.963429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.963634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.963640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.963876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.963883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.964273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.964280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.964653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.964660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 
00:44:02.029 [2024-06-10 11:52:30.964904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.964911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.965288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.029 [2024-06-10 11:52:30.965296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.029 qpair failed and we were unable to recover it. 00:44:02.029 [2024-06-10 11:52:30.965616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.965623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.965975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.965982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.966315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.966323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.966696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.966703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.966914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.966921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.967270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.967278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.967638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.967645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.967963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.967970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 
00:44:02.030 [2024-06-10 11:52:30.968201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.968208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.968592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.968600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.968960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.968968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.969326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.969333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.969709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.969716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.970088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.970096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.970451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.970458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.970856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.970863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.971221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.971228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.971565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.971571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 
00:44:02.030 [2024-06-10 11:52:30.971971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.971977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.972305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.972313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.972520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.972527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.972698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.972705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.973026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.973033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.973378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.973385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.030 [2024-06-10 11:52:30.973598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.030 [2024-06-10 11:52:30.973608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.030 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.973796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.973805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.974125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.974133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.974486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.974494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 
00:44:02.304 [2024-06-10 11:52:30.974870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.974877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.975217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.975224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.975563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.975569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.975940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.975947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.976280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.976286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.976500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.976506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.976706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.976714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.977048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.977055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.977391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.977399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.977608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.977615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 
00:44:02.304 [2024-06-10 11:52:30.977917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.977924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.978266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.978272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.978603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.978610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.978868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.978876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.979106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.979113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.979455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.979463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.979589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.979595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.979902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.979909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.980283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.980289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.980543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.980550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 
00:44:02.304 [2024-06-10 11:52:30.980903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.980909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.981245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.981252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.981601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.981608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.981854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.981863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.982237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.982244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.982617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.982624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.982996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.983003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.983337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.983344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.983529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.983536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.983854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.983862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 
00:44:02.304 [2024-06-10 11:52:30.984203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.304 [2024-06-10 11:52:30.984209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.304 qpair failed and we were unable to recover it. 00:44:02.304 [2024-06-10 11:52:30.984546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.984553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.984748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.984754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.985127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.985133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.985291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.985299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.985656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.985662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.986081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.986088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.986420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.986427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.986679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.986686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.986895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.986901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 
00:44:02.305 [2024-06-10 11:52:30.987273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.987279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.987612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.987618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.987972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.987980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.988187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.988194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.988376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.988383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.988807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.988814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.989161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.989167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.989493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.989500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.989832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.989839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.990187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.990195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 
00:44:02.305 [2024-06-10 11:52:30.990343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.990350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.990560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.990566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.990898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.990905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.991243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.991250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.991437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.991444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.991801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.991808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.992155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.992162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.992426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.992434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.992783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.992790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.993166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.993173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 
00:44:02.305 [2024-06-10 11:52:30.993505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.993511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.993694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.993702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.993999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.994006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.994366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.994375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.994713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.994720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.995071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.995079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.995436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.995443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.995783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.995790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.996225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.996232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.996591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.996599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 
00:44:02.305 [2024-06-10 11:52:30.997036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.997043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.997300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.997307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.997736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.997743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.997905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.997911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.998278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.998284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.998620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.998627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.998973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.998980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.999281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.999288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.999617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.999624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:30.999876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:30.999883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 
00:44:02.305 [2024-06-10 11:52:31.000257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.000264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.000483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.000491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.000842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.000849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.001199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.001206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.001402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.001408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.001646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.001652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.002034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.002040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.002372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.002378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.002718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.002724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 00:44:02.305 [2024-06-10 11:52:31.002971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.305 [2024-06-10 11:52:31.002978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.305 qpair failed and we were unable to recover it. 
00:44:02.305 [2024-06-10 11:52:31.003376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.305 [2024-06-10 11:52:31.003383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.305 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 11:52:31.003810 through 11:52:31.064904, each time with connect() failed, errno = 111 and a sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420, and each time the qpair failed and could not be recovered ...]
00:44:02.308 [2024-06-10 11:52:31.065263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.308 [2024-06-10 11:52:31.065269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.308 qpair failed and we were unable to recover it.
00:44:02.308 [2024-06-10 11:52:31.065607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.308 [2024-06-10 11:52:31.065614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.308 qpair failed and we were unable to recover it. 00:44:02.308 [2024-06-10 11:52:31.065965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.308 [2024-06-10 11:52:31.065972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.308 qpair failed and we were unable to recover it. 00:44:02.308 [2024-06-10 11:52:31.066179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.308 [2024-06-10 11:52:31.066185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.308 qpair failed and we were unable to recover it. 00:44:02.308 [2024-06-10 11:52:31.066434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.308 [2024-06-10 11:52:31.066441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.308 qpair failed and we were unable to recover it. 00:44:02.308 [2024-06-10 11:52:31.066829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.308 [2024-06-10 11:52:31.066836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.308 qpair failed and we were unable to recover it. 00:44:02.308 [2024-06-10 11:52:31.067171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.308 [2024-06-10 11:52:31.067179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.308 qpair failed and we were unable to recover it. 00:44:02.308 [2024-06-10 11:52:31.067513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.308 [2024-06-10 11:52:31.067520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.308 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.067874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.067881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.068198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.068205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.068579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.068586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 
00:44:02.309 [2024-06-10 11:52:31.068993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.069000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.069256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.069263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.069579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.069586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.069931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.069939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.070323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.070329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.070680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.070687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.071108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.071114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.071453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.071461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.071651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.071657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.071985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.071992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 
00:44:02.309 [2024-06-10 11:52:31.072200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.072207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.072596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.072602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.072860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.072867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.073186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.073192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.073612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.073619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.074043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.074051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.074243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.074250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.074606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.074612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.074942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.074949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.075134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.075141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 
00:44:02.309 [2024-06-10 11:52:31.075459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.075466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.075675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.075683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.075886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.075893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.076309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.076315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.076576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.076583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.076931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.076938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.077162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.077168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.077400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.077407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.077733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.077740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.078006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.078013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 
00:44:02.309 [2024-06-10 11:52:31.078188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.078195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.078547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.078553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.078906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.078913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.079244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.079250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.079582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.079588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.079944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.079951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.080296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.080302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.080636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.080642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.081050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.081058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.081435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.081442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 
00:44:02.309 [2024-06-10 11:52:31.081820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.081827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.082171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.082178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.082353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.082360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.082554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.082562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.082907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.082914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.083246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.083252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.083625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.083632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.083928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.083935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.084313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.084321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.084673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.084681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 
00:44:02.309 [2024-06-10 11:52:31.084921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.084928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.085129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.085135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.085543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.085550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.085801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.085808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.086112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.086119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.086468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.309 [2024-06-10 11:52:31.086474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.309 qpair failed and we were unable to recover it. 00:44:02.309 [2024-06-10 11:52:31.086726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.086734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.086960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.086967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.087189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.087196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.087576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.087582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 
00:44:02.310 [2024-06-10 11:52:31.087775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.087783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.088103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.088109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.088445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.088451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.088709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.088716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.088906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.088912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.089089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.089096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.089484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.089490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.089821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.089828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.090181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.090187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.090594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.090601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 
00:44:02.310 [2024-06-10 11:52:31.091001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.091008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.091269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.091275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.091619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.091625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.091873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.091880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.092206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.092212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.092421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.092428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.092787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.092794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.093166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.093172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.093508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.093515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.093898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.093905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 
00:44:02.310 [2024-06-10 11:52:31.094237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.094243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.094572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.094578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.094946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.094954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.095326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.095334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.095596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.095603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.095967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.095974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.096322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.096329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.096737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.096743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.097001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.097008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.097372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.097379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 
00:44:02.310 [2024-06-10 11:52:31.097708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.097715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.098097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.098103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.098433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.098440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.098645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.098652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.099029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.099037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.099443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.099449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.099804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.099811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.100006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.100014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.100426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.100433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.100757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.100763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 
00:44:02.310 [2024-06-10 11:52:31.101152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.101159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.101343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.101350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.101722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.101730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.102084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.102091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.102426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.102433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.102757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.102764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.103077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.103083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.103431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.103438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.103815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.103821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.104244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.104250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 
00:44:02.310 [2024-06-10 11:52:31.104637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.104644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.104989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.104996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.105369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.105376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.105725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.105732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.106073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.106080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.106498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.106505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.106868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.106875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.107186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.107192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.107629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.107635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 00:44:02.310 [2024-06-10 11:52:31.107842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.310 [2024-06-10 11:52:31.107849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.310 qpair failed and we were unable to recover it. 
00:44:02.310 [2024-06-10 11:52:31.108174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.310 [2024-06-10 11:52:31.108181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.310 qpair failed and we were unable to recover it.
[... repeated identical retry entries elided: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. ...]
00:44:02.313 [2024-06-10 11:52:31.174139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:44:02.313 [2024-06-10 11:52:31.174146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420
00:44:02.313 qpair failed and we were unable to recover it.
00:44:02.313 [2024-06-10 11:52:31.174479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.174485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.174697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.174704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.174754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.174760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.175072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.175080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.175457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.175464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.175815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.175822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.176054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.176061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.176424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.176432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.176763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.176770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.177087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.177094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 
00:44:02.314 [2024-06-10 11:52:31.177467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.177474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.177866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.177873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.178208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.178215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.178545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.178552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.178904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.178912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.179234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.179242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.179432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.179438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.179789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.179796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.179953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.179961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.180165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.180173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 
00:44:02.314 [2024-06-10 11:52:31.180569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.180577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.180923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.180931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.181273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.181280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.181608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.181616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.181942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.181950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.182343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.182352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.182702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.182709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.182884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.182891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.183272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.183278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.183648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.183654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 
00:44:02.314 [2024-06-10 11:52:31.183999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.184006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.184348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.184354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.184531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.184538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.184816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.184823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.185154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.185161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.185490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.185496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.185701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.185709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.186072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.186080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.186397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.186404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.186655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.186662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 
00:44:02.314 [2024-06-10 11:52:31.186980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.186987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.187233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.187240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.187578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.187584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.187932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.187940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.188270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.188277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.188522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.188529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.188935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.188942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.189162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.189170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.189383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.189390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.189814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.189822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 
00:44:02.314 [2024-06-10 11:52:31.190008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.190015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.190356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.190362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.190701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.190708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.191028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.191034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.191371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.191377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.191569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.191575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.191934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.191942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.192279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.314 [2024-06-10 11:52:31.192286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.314 qpair failed and we were unable to recover it. 00:44:02.314 [2024-06-10 11:52:31.192696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.192703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.193046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.193054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 
00:44:02.315 [2024-06-10 11:52:31.193441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.193449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.193781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.193788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.193990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.193996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.194265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.194272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.194459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.194465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.194783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.194790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.195159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.195166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.195365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.195372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.195754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.195762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.195981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.195989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 
00:44:02.315 [2024-06-10 11:52:31.196418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.196426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.196770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.196777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.197127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.197133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.197319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.197327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.197658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.197666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.198047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.198054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.198259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.198265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.198674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.198681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.199004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.199011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.199350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.199356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 
00:44:02.315 [2024-06-10 11:52:31.199540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.199547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.199776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.199784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.200168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.200174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.200533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.200541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.200882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.200889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.201069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.201076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.201455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.201461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.201645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.201653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.202061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.202068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.202317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.202324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 
00:44:02.315 [2024-06-10 11:52:31.202679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.202686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.203039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.203047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.203394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.203400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.203593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.203599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.203967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.203975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.204315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.204321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.204698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.204705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.204949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.204957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.205191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.205198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.205546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.205552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 
00:44:02.315 [2024-06-10 11:52:31.206027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.206038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.206208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.206215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.206467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.206474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.206808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.206815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.207169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.207175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.207508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.207515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.207854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.207861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.208176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.208183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.208591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.208598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.208999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.209006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 
00:44:02.315 [2024-06-10 11:52:31.209345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.209353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.209727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.209734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.209913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.209920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.210224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.210231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.210667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.210678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.210903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.210911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.211240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.211246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.211581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.211588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.211783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.211791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.212010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.212017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 
00:44:02.315 [2024-06-10 11:52:31.212403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.212409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.212745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.212752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.213139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.213146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.213472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.213479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.213636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.315 [2024-06-10 11:52:31.213642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.315 qpair failed and we were unable to recover it. 00:44:02.315 [2024-06-10 11:52:31.213951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.213958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.214312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.214318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.214649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.214656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.215033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.215041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.215433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.215440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 
00:44:02.316 [2024-06-10 11:52:31.215875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.215882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.216218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.216225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.216414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.216421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.216744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.216751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.217013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.217020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.217449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.217456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.217786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.217792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.218147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.218153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.218485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.218492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.218822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.218829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 
00:44:02.316 [2024-06-10 11:52:31.219193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.219202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.219656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.219663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.219926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.219933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.220266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.220273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.220615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.220622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.220988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.220995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.221333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.221340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.221679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.221686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.222012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.222019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.222462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.222468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 
00:44:02.316 [2024-06-10 11:52:31.222822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.222829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.223013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.223021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.223412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.223420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.223817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.223825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.224191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.224198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.224550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.224556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.224741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.224749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.225067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.225074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.225406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.225413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.225757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.225765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 
00:44:02.316 [2024-06-10 11:52:31.226169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.226176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.226503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.226509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.226839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.226846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.227177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.227184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.227477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.227483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.227661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.227672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.227904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.227912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.228254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.228261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.228646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.228654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.229010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.229017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 
00:44:02.316 [2024-06-10 11:52:31.229348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.229354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.229525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.229532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.229907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.229915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.230260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.230266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.230517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.230523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.230908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.230915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.231246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.231252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.231581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.231588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.231766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.231773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.232014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.232021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 
00:44:02.316 [2024-06-10 11:52:31.232229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.232237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.232608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.232614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.232878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.232885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.233235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.233242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.233579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.233586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.233830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.233837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.234209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.234215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.234591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.234598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.234946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.234953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.235286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.235292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 
00:44:02.316 [2024-06-10 11:52:31.235551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.235558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.235757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.235765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.316 qpair failed and we were unable to recover it. 00:44:02.316 [2024-06-10 11:52:31.236101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.316 [2024-06-10 11:52:31.236108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.236411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.236417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.236763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.236770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.236967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.236974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.237300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.237307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.237639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.237645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.237983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.237991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.238159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.238166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 
00:44:02.317 [2024-06-10 11:52:31.238590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.238597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.238938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.238945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.239323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.239329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.239656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.239663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.240014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.240020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.240216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.240224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.240584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.240591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.240936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.240943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.241272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.241278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.241532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.241539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 
00:44:02.317 [2024-06-10 11:52:31.241755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.241762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.241980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.241987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.242359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.242365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.242704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.242712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.243031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.243037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.243406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.243412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.243755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.243762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.243935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.243942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.244302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.244308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.244688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.244696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 
00:44:02.317 [2024-06-10 11:52:31.244978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.244986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.245345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.245351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.245701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.245708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.246074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.246080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.246131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.246137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.246472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.246478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.246810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.246818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.247177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.247183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.247517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.247524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.247723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.247730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 
00:44:02.317 [2024-06-10 11:52:31.248143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.248150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.248492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.248500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.248874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.248881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.249218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.249225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.249428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.249435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.249701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.249707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.250056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.250062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.250397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.250404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.250735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.250742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.250966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.250974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 
00:44:02.317 [2024-06-10 11:52:31.251346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.251353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.251688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.251695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.252122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.252129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.252470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.252476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.252819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.252825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.253178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.253184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.253525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.253532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.253867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.253874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.254135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.317 [2024-06-10 11:52:31.254142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.317 qpair failed and we were unable to recover it. 00:44:02.317 [2024-06-10 11:52:31.254506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.254513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 
00:44:02.318 [2024-06-10 11:52:31.254941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.254949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.255286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.255292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.255628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.255635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.255999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.256005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.256349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.256356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.256695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.256701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.257049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.257056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.257427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.257435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.257784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.257792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.258045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.258052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 
00:44:02.318 [2024-06-10 11:52:31.258391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.258399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.258600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.258606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.258852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.258866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.259118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.259125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.259470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.259476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.259809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.259816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.260158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.260165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.260498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.260505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.260715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.260723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.261100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.261107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 
00:44:02.318 [2024-06-10 11:52:31.261486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.261493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.318 [2024-06-10 11:52:31.261829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.318 [2024-06-10 11:52:31.261837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.318 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.262215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.262225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.262558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.262565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.262749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.262757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.262999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.263006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.263236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.263242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.263608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.263615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.263820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.263827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.264180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.264186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 
00:44:02.668 [2024-06-10 11:52:31.264406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.264414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.264773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.264780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.265001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.265008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.265218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.265225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.265403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.265409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.265677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.265685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.266053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.266060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.266413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.266420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.266608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.266614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.266851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.266858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 
00:44:02.668 [2024-06-10 11:52:31.267041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.267047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.267479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.267486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.267816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.267823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.268133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.268139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.268437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.268444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.268832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.668 [2024-06-10 11:52:31.268839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.668 qpair failed and we were unable to recover it. 00:44:02.668 [2024-06-10 11:52:31.269036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.269042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.269362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.269368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.269732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.269739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.270055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.270062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 
00:44:02.669 [2024-06-10 11:52:31.270445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.270455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.270816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.270823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.271153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.271160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.271460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.271466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.271835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.271842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.272051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.272057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.272251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.272257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.272584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.272590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.272990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.272997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 00:44:02.669 [2024-06-10 11:52:31.273328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.669 [2024-06-10 11:52:31.273335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.669 qpair failed and we were unable to recover it. 
00:44:02.675 [2024-06-10 11:52:31.334272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.334278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.334614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.334620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.334964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.334971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.335310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.335316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.335696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.335702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.335906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.335913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.336100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.336107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.336446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.336453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.336796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.336802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.337147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.337155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 
00:44:02.675 [2024-06-10 11:52:31.337528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.337535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.337786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.337794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.338012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.338019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.338215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.338222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.338560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.338566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.338910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.338917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.339248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.339255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.339452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.339460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.339827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.339834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.340155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.340161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 
00:44:02.675 [2024-06-10 11:52:31.340502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.340508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.340898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.340905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.341108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.341115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.341452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.341459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.341808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.341814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.342241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.342248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.342423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.342431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.342610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.342616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.342951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.342958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.343293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.343299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 
00:44:02.675 [2024-06-10 11:52:31.343486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.343493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.343730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.343737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.675 qpair failed and we were unable to recover it. 00:44:02.675 [2024-06-10 11:52:31.344150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.675 [2024-06-10 11:52:31.344156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.344487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.344493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.344826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.344833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.345173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.345179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.345511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.345518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.345864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.345870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.345920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.345927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.346203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.346210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 
00:44:02.676 [2024-06-10 11:52:31.346397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.346404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.346790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.346798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.347138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.347146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.347519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.347526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.347880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.347887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.348076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.348083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.348437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.348444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.348819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.348827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.349177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.349183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.349517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.349524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 
00:44:02.676 [2024-06-10 11:52:31.349860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.349866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.350027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.350034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.350398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.350404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.350634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.350641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.350860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.350868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.351250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.351256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.351603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.351609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.351950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.351959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.352176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.352182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.352437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.352444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 
00:44:02.676 [2024-06-10 11:52:31.352640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.352647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.352973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.352980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.353351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.353358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.353591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.353597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.353825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.353832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.354025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.354032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.354384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.354392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.354607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.354615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.355036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.355043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.355246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.355253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 
00:44:02.676 [2024-06-10 11:52:31.355471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.676 [2024-06-10 11:52:31.355478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.676 qpair failed and we were unable to recover it. 00:44:02.676 [2024-06-10 11:52:31.355824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.355831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.356213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.356221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.356577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.356584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.356938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.356944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.357289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.357295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.357545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.357552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.357762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.357769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.358054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.358061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.358422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.358429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 
00:44:02.677 [2024-06-10 11:52:31.358800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.358807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.359142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.359149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.359400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.359407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.359752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.359759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.360129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.360136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.360474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.360481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.360813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.360820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.361015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.361022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.361332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.361338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.361493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.361500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 
00:44:02.677 [2024-06-10 11:52:31.361997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.362004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.362188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.362195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.362472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.362479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.362808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.362815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.363156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.363162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.363493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.363501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.363831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.363838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.364021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.364029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.364275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.364281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.364573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.364580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 
00:44:02.677 [2024-06-10 11:52:31.364930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.364937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.365285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.365291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.365710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.365717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.677 [2024-06-10 11:52:31.366049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.677 [2024-06-10 11:52:31.366056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.677 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.366398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.366405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.366750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.366758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.367113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.367119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.367410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.367417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.367773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.367780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.368117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.368124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 
00:44:02.678 [2024-06-10 11:52:31.368310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.368316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.368648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.368655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.368953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.368960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.369295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.369302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.369361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.369368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.369686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.369693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.370115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.370122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.370453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.370460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.370877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.370884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.371221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.371228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 
00:44:02.678 [2024-06-10 11:52:31.371566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.371573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.371944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.371952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.372380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.372388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.372582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.372588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.373002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.373009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.373344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.373352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.373759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.373766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.374102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.374109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.374447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.374454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.374788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.374797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 
00:44:02.678 [2024-06-10 11:52:31.375148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.375156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.375376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.375383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.375591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.375598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.375818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.375824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.376078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.376084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.376278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.376285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.376555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.376561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.376895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.376905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.377281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.377288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.377660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.377667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 
00:44:02.678 [2024-06-10 11:52:31.378037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.378044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.678 [2024-06-10 11:52:31.378299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.678 [2024-06-10 11:52:31.378306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.678 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.378703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.378711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.378983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.378990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.379348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.379354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.379686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.379693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.379964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.379971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.380049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.380056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.380397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.380404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.380588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.380595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 
00:44:02.679 [2024-06-10 11:52:31.380958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.380966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.381401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.381408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.381740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.381747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.382134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.382140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.382469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.382477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.382808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.382815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.383196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.383202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.383534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.383541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.383877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.383884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.384233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.384241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 
00:44:02.679 [2024-06-10 11:52:31.384496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.384503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.384852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.384859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.385228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.385234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.385578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.385584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.385778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.385786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.386094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.386101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.386441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.386448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.386781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.386788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.387125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.387132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.387471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.387478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 
00:44:02.679 [2024-06-10 11:52:31.387815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.387822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.388183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.388190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.388607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.388614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.388954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.388961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.389078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.389085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.389440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.389447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.389792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.389798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.390168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.390177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.390507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.390513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 00:44:02.679 [2024-06-10 11:52:31.390712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.679 [2024-06-10 11:52:31.390719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.679 qpair failed and we were unable to recover it. 
00:44:02.679 [2024-06-10 11:52:31.391116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.391122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.391462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.391468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.391801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.391809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.392141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.392148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.392543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.392549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.392888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.392894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.393150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.393157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.393514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.393522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.393712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.393719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.394065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.394071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 
00:44:02.680 [2024-06-10 11:52:31.394421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.394427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.394759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.394766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.395140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.395147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.395476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.395483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.395817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.395825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.396189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.396197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.396378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.396385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.396712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.396718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.396962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.396968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.397211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.397217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 
00:44:02.680 [2024-06-10 11:52:31.397646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.397653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.397993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.398000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.398327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.398335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.398686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.398694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.399103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.399110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.399288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.399295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.399572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.399579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.399781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.399788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.400182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.400189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.400558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.400565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 
00:44:02.680 [2024-06-10 11:52:31.400771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.400778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.401146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.401153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.401533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.401539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.401775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.401781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.402134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.402141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.402486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.402492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.402824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.402832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.680 qpair failed and we were unable to recover it. 00:44:02.680 [2024-06-10 11:52:31.403195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.680 [2024-06-10 11:52:31.403204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.403406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.403413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.403720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.403727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 
00:44:02.681 [2024-06-10 11:52:31.403950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.403957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.404156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.404163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.404346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.404352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.404615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.404622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.404821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.404827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.405030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.405043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.405402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.405409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.405608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.405614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.406013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.406021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.406406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.406413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 
00:44:02.681 [2024-06-10 11:52:31.406772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.406779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.407129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.407136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.407469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.407476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.407862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.407869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.408244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.408250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.408445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.408451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.408626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.408633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.408830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.408838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.409123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.409129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.409489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.409496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 
00:44:02.681 [2024-06-10 11:52:31.409876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.409883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.410220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.410227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.410581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.410588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.410927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.410935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.411268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.411276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.411489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.411495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.411697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.411704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.412052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.412060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.412431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.412438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.412644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.412650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 
00:44:02.681 [2024-06-10 11:52:31.412986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.412993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.413325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.413331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.413662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.413672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.414014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.414021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.414355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.414362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.414555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.681 [2024-06-10 11:52:31.414563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.681 qpair failed and we were unable to recover it. 00:44:02.681 [2024-06-10 11:52:31.414905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.414912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.415292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.415300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.415635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.415642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.416000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.416007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 
00:44:02.682 [2024-06-10 11:52:31.416192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.416199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.416509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.416516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.416724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.416730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.416936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.416943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.417148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.417154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.417543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.417550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.417883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.417890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.418224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.682 [2024-06-10 11:52:31.418230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.682 qpair failed and we were unable to recover it. 00:44:02.682 [2024-06-10 11:52:31.418562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.418570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.418966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.418973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 
00:44:02.683 [2024-06-10 11:52:31.419348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.419355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.419727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.419735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.420082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.420089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.420422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.420430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.420608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.420615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.421015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.421022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.421294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.421300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.421496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.421502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.421781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.421788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.422160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.422167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 
00:44:02.683 [2024-06-10 11:52:31.422374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.422381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.422748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.422756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.423126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.423133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.423508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.423514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.423817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.423824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.424178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.424185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.424379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.424387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.424695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.424702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.425007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.425014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.425188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.425195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 
00:44:02.683 [2024-06-10 11:52:31.425255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.425262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.425585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.425591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.425933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.425941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.426121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.426129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.683 [2024-06-10 11:52:31.426446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.683 [2024-06-10 11:52:31.426453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.683 qpair failed and we were unable to recover it. 00:44:02.684 [2024-06-10 11:52:31.426785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.684 [2024-06-10 11:52:31.426792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.684 qpair failed and we were unable to recover it. 00:44:02.684 [2024-06-10 11:52:31.427133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.684 [2024-06-10 11:52:31.427139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.684 qpair failed and we were unable to recover it. 00:44:02.684 [2024-06-10 11:52:31.427514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.684 [2024-06-10 11:52:31.427524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.684 qpair failed and we were unable to recover it. 00:44:02.684 [2024-06-10 11:52:31.427861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.684 [2024-06-10 11:52:31.427868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.684 qpair failed and we were unable to recover it. 00:44:02.684 [2024-06-10 11:52:31.428085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.684 [2024-06-10 11:52:31.428091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.684 qpair failed and we were unable to recover it. 
00:44:02.684 [2024-06-10 11:52:31.428260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.684 [2024-06-10 11:52:31.428267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.684 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for every reconnect attempt from 11:52:31.428260 through 11:52:31.494974; identical intermediate occurrences elided ...]
00:44:02.689 [2024-06-10 11:52:31.494967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.689 [2024-06-10 11:52:31.494974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.689 qpair failed and we were unable to recover it.
00:44:02.689 [2024-06-10 11:52:31.495328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.689 [2024-06-10 11:52:31.495334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.689 qpair failed and we were unable to recover it. 00:44:02.689 [2024-06-10 11:52:31.495530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.689 [2024-06-10 11:52:31.495536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.689 qpair failed and we were unable to recover it. 00:44:02.689 [2024-06-10 11:52:31.495958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.689 [2024-06-10 11:52:31.495966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.689 qpair failed and we were unable to recover it. 00:44:02.689 [2024-06-10 11:52:31.496296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.689 [2024-06-10 11:52:31.496304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.689 qpair failed and we were unable to recover it. 00:44:02.689 [2024-06-10 11:52:31.496657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.689 [2024-06-10 11:52:31.496664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.689 qpair failed and we were unable to recover it. 00:44:02.689 [2024-06-10 11:52:31.497003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.689 [2024-06-10 11:52:31.497010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.497207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.497213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.497538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.497545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.497885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.497893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.498270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.498277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 
00:44:02.690 [2024-06-10 11:52:31.498485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.498492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.498577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.498583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.498772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.498780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.499176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.499183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.499429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.499437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.499814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.499821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.500154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.500161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.500357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.500363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.500675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.500682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.501036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.501044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 
00:44:02.690 [2024-06-10 11:52:31.501402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.501410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.501766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.501773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.502166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.502173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.502430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.502438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.502787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.502794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.503020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.503027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.503337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.503344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.503682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.503689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.504013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.504020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.504204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.504210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 
00:44:02.690 [2024-06-10 11:52:31.504526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.504532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.504726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.504733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.504950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.504956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.505297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.505304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.505645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.505652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.505993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.506000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.506329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.506335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.506681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.506688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.507100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.507107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.507441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.507448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 
00:44:02.690 [2024-06-10 11:52:31.507793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.507800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.508060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.508067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.508386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.508394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.508748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.690 [2024-06-10 11:52:31.508755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.690 qpair failed and we were unable to recover it. 00:44:02.690 [2024-06-10 11:52:31.509095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.509102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.509435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.509442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.509684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.509691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.510036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.510042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.510349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.510355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.510525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.510533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 
00:44:02.691 [2024-06-10 11:52:31.510799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.510806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.511009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.511017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.511410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.511416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.511740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.511746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.511919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.511927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.512323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.512330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.512752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.512758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.513142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.513149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.513477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.513483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.513679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.513686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 
00:44:02.691 [2024-06-10 11:52:31.514039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.514045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.514396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.514404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.514618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.514625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.514828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.514834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.514998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.515005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.515320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.515327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.515681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.515687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.516055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.516062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.516290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.516297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.516644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.516650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 
00:44:02.691 [2024-06-10 11:52:31.516858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.516865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.517255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.517261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.517447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.517454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.517767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.517775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.518138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.518144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.518477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.518483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.518771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.518778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.519118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.519124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.519504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.519511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.519855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.519862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 
00:44:02.691 [2024-06-10 11:52:31.520183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.520189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.520396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.691 [2024-06-10 11:52:31.520403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.691 qpair failed and we were unable to recover it. 00:44:02.691 [2024-06-10 11:52:31.520807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.520814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.521015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.521022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.521385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.521392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.521760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.521767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.521828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.521833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.522059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.522065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.522448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.522454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.522709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.522716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 
00:44:02.692 [2024-06-10 11:52:31.522960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.522966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.523294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.523301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.523503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.523510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.523734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.523740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.524058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.524066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.524445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.524452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.524790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.524797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.525176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.525183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.525379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.525385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.525738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.525745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 
00:44:02.692 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:44:02.692 [2024-06-10 11:52:31.526114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.526123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:44:02.692 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:02.692 [2024-06-10 11:52:31.526297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.526305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:02.692 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:02.692 [2024-06-10 11:52:31.526533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.526542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.526899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.526907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.527241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.527248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.527428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.527436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.527662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.527671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.528090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.528096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 
00:44:02.692 [2024-06-10 11:52:31.528437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.528444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.528631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.528638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.528950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.528958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.529289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.529297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.692 [2024-06-10 11:52:31.529632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.692 [2024-06-10 11:52:31.529639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.692 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.529812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.529819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.530223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.530231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.530440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.530447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.530824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.530831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.531147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.531154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 
00:44:02.693 [2024-06-10 11:52:31.531230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.531236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.531593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.531601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.531946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.531953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.532332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.532339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.532629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.532636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.532839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.532846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.533166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.533172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.533419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.533427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.533783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.533791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.534224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.534232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 
00:44:02.693 [2024-06-10 11:52:31.534566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.534573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.534824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.534831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.535140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.535147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.535506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.535513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.535867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.535876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.536207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.536214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.536505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.536512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.536863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.536871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.537058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.537066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.537316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.537322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 
00:44:02.693 [2024-06-10 11:52:31.537539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.537546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.537853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.537860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.538205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.538213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.538584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.538592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.538774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.538781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.539123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.539130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.539460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.539468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.539826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.539833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.540134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.540142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.540339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.540346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 
00:44:02.693 [2024-06-10 11:52:31.540552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.540560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.540888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.540895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.693 [2024-06-10 11:52:31.541239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.693 [2024-06-10 11:52:31.541247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.693 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.541576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.541583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.541928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.541935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.542131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.542139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.542437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.542445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.542742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.542749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.543135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.543142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.543296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.543302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 
00:44:02.694 [2024-06-10 11:52:31.543614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.543621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.543982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.543989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.544318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.544326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.544520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.544527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.544747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.544756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.545028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.545035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.545520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.545535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.545883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.545891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.546129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.546138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.546493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.546501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 
00:44:02.694 [2024-06-10 11:52:31.546864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.546872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.547295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.547303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.547500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.547507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.547752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.547760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.548122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.548132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.548327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.548333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.548687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.548694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.549002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.549010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.549354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.549361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.549697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.549704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 
00:44:02.694 [2024-06-10 11:52:31.550062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.550069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.550401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.550407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.550738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.550746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.551092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.551099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.551289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.551296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.551659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.551666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.552044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.552051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.552395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.552403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.552790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.552798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.552994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.553002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 
00:44:02.694 [2024-06-10 11:52:31.553346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.694 [2024-06-10 11:52:31.553352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.694 qpair failed and we were unable to recover it. 00:44:02.694 [2024-06-10 11:52:31.553689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.553696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.553948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.553957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.554254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.554262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.554514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.554521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.554869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.554876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.555219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.555227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.555557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.555565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.555944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.555952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.556342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.556350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 
00:44:02.695 [2024-06-10 11:52:31.556698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.556706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.556975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.556982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.557333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.557339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.557597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.557604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.557656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.557663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.557995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.558003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.558333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.558339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.558526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.558534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.558894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.558902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.559234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.559242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 
00:44:02.695 [2024-06-10 11:52:31.559574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.559581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.559951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.559958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.560297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.560305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.560641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.560647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.560855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.560864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.561235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.561242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.561456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.561464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.561799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.561806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.562144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.562150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.562480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.562487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 
00:44:02.695 [2024-06-10 11:52:31.562819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.562826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.563161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.563169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.563497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.563504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 [2024-06-10 11:52:31.563760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.563767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:02.695 [2024-06-10 11:52:31.564119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.564128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:02.695 [2024-06-10 11:52:31.564505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.564514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:02.695 [2024-06-10 11:52:31.564876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.695 [2024-06-10 11:52:31.564887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.695 qpair failed and we were unable to recover it. 00:44:02.695 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:02.695 [2024-06-10 11:52:31.565248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.565256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 
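The xtrace line above registers cleanup with a shell trap, so that process_shm and nvmftestfini run on SIGINT, SIGTERM, or normal exit. A minimal standalone sketch of the same pattern (cleanup() here is a hypothetical stand-in for the SPDK test helpers, not code from this run):

#!/usr/bin/env bash
cleanup() {
    # stand-in for: process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini
    echo "dumping shared memory state and tearing down the NVMe-oF target"
}
trap 'cleanup' SIGINT SIGTERM EXIT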
00:44:02.696 [2024-06-10 11:52:31.565675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.565684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.565735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.565742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.565956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.565964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.566164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.566171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.566502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.566508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.566693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.566701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.567073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.567080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.567487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.567494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.567820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.567827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.568169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.568175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 
00:44:02.696 [2024-06-10 11:52:31.568421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.568428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.568691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.568700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.568914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.568921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.569269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.569275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.569596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.569603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.569951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.569958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.570258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.570265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.570453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.570460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.570838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.570845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.571188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.571195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 
00:44:02.696 [2024-06-10 11:52:31.571536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.571542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.571890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.571898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.572277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.572284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.572633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.572639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.572971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.572977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.573311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.573318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.573652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.573659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.574001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.574008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.574254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.574261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.574521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.574527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 
00:44:02.696 [2024-06-10 11:52:31.574867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.574875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.575240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.575247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.575599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.575606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.576033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.576040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.696 qpair failed and we were unable to recover it. 00:44:02.696 [2024-06-10 11:52:31.576220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.696 [2024-06-10 11:52:31.576227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.576641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.576648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.576990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.576996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.577335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.577341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.577394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.577401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.577589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.577596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 
00:44:02.697 [2024-06-10 11:52:31.577843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.577851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.578275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.578281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.578618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.578624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.578818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.578825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.579180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.579187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.579558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.579565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.579930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.579937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.580318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.580325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.580538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.580546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.580606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.580613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 
00:44:02.697 [2024-06-10 11:52:31.580845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.580852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.581233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.581243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.581436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.581444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 Malloc0 00:44:02.697 [2024-06-10 11:52:31.581842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.581850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.582048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.582056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:02.697 [2024-06-10 11:52:31.582307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.582314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:44:02.697 [2024-06-10 11:52:31.582707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.582715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:02.697 [2024-06-10 11:52:31.583080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.583088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 
00:44:02.697 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:02.697 [2024-06-10 11:52:31.583308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.583316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.583791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.583799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.584121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.584128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.584493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.584500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.584741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.584749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.585116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.585123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.585546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.585553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.585973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.585981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.586344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.586352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 
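Interleaved with the connection errors, the test driver is configuring the target side: rpc_cmd bdev_malloc_create 64 512 -b Malloc0 and rpc_cmd nvmf_create_transport -t tcp -o (rpc_cmd being the test harness's wrapper around SPDK's RPC client). A rough hand-issued equivalent against a running nvmf_tgt would look like the sketch below; paths assume a standard SPDK checkout, and the -o transport option from the log is omitted:

# create a 64 MiB malloc bdev with 512-byte blocks, named Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# initialize the TCP transport, which produces the "*** TCP Transport Init ***" notice seen just below
./scripts/rpc.py nvmf_create_transport -t tcp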
00:44:02.697 [2024-06-10 11:52:31.586747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.697 [2024-06-10 11:52:31.586762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.697 qpair failed and we were unable to recover it. 00:44:02.697 [2024-06-10 11:52:31.587143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.587151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.587499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.587507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.587714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.587722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.588082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.588089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.588141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.588148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.588475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.588483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.588798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.588805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.588867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.588875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b9[2024-06-10 11:52:31.588861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:02.698 0 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 
00:44:02.698 [2024-06-10 11:52:31.589213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.589221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.589284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.589291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.589503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.589510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.589868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.589876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.590229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.590236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.590598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.590606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.590851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.590859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.591176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.591184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.591601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.591609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.591962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.591971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 
00:44:02.698 [2024-06-10 11:52:31.592318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.592326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.592664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.592674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.592929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.592936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.593291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.593300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.593649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.593656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.594047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.594055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.594408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.594415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.594630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.594637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.594856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.594864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.595107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.595114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 
00:44:02.698 [2024-06-10 11:52:31.595468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.595475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.595833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.595840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.596193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.596199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.596536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.596542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.596939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.596946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.597201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.597207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.597573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.597580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 [2024-06-10 11:52:31.597944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.597952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 00:44:02.698 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:02.698 [2024-06-10 11:52:31.598308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.698 [2024-06-10 11:52:31.598315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.698 qpair failed and we were unable to recover it. 
00:44:02.698 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:02.698 [2024-06-10 11:52:31.598676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.598684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:02.699 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:02.699 [2024-06-10 11:52:31.599030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.599037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.599294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.599301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.599524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.599530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.599887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.599894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.600126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.600132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.600384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.600392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.600755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.600762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 
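The rpc_cmd nvmf_create_subsystem call above is the target-side step that creates the subsystem the initiator keeps trying to reach. Outside the autotest wrapper, the same step can be issued directly with SPDK's scripts/rpc.py; the NQN, the -a (allow any host) flag, and the -s serial number below are the ones from the log line, and a running nvmf_tgt reachable by rpc.py is assumed.

    # Sketch: the same subsystem creation issued directly against a running nvmf_tgt.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a \
        -s SPDK00000000000001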
00:44:02.699 [2024-06-10 11:52:31.601002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.601008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.601270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.601278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.601546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.601553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.601744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.601750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.602144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.602150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.602521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.602527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.602864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.602871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.603029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.603035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.603218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.603226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.603542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.603549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 
00:44:02.699 [2024-06-10 11:52:31.603807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.603814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.604179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.604186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.604536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.604543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.604892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.604899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.605111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.605117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.605482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.605488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.605690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.605697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.606171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.606177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.606410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.606417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.606632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.606639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 
00:44:02.699 [2024-06-10 11:52:31.607041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.607048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.607254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.607261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.607622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.607628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.608028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.608036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.608217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.608223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.608612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.608619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.608968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.608975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.609363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.609369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.699 [2024-06-10 11:52:31.609716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.609722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 
00:44:02.699 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:02.699 [2024-06-10 11:52:31.610077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.699 [2024-06-10 11:52:31.610084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.699 qpair failed and we were unable to recover it. 00:44:02.700 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:02.700 [2024-06-10 11:52:31.610493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.610500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:02.700 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:02.700 [2024-06-10 11:52:31.610888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.610895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.611252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.611259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.611467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.611474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.611582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.611589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.611921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.611928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.612264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.612270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 
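The nvmf_subsystem_add_ns call above attaches the Malloc0 bdev to the subsystem as a namespace. The bdev itself is created earlier in the test and is not visible in this part of the log; a plain rpc.py sketch of both steps follows, where the 64 MiB size and 512-byte block size are illustrative values, not values taken from this run.

    # Sketch: create a RAM-backed bdev, then expose it through the subsystem.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # size_MB block_size
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0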
00:44:02.700 [2024-06-10 11:52:31.612597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.612604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.612964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.612970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.613357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.613363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.613615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.613622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.613972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.613979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.614380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.614387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.700 [2024-06-10 11:52:31.614589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.700 [2024-06-10 11:52:31.614597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.700 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.614962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.614970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.615366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.615374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.615747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.615754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 
00:44:02.963 [2024-06-10 11:52:31.616014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.616020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.616369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.616376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.616674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.616681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.616916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.616922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.617281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.617287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.617472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.617478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.617879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.617886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.618224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.618230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.618568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.618574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.618927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.618934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 
00:44:02.963 [2024-06-10 11:52:31.619128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.619134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.619527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.619534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.619874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.619881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.620092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.620098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.620413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.620420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.620638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.620646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.620964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.620971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.621325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.621332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.621693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.621700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 
00:44:02.963 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:02.963 [2024-06-10 11:52:31.622058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.622065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.622234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.622241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:02.963 [2024-06-10 11:52:31.622560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.622567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:02.963 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:02.963 [2024-06-10 11:52:31.622919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.622929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.623274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.623281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.963 qpair failed and we were unable to recover it. 00:44:02.963 [2024-06-10 11:52:31.623491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.963 [2024-06-10 11:52:31.623498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.623882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.623889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.624123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.624129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 
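The nvmf_subsystem_add_listener call above is what finally gives the retrying initiator something to connect to. It presumes a TCP transport already exists, which is what the "*** TCP Transport Init ***" notice earlier corresponds to. A standalone rpc.py sketch of the pair of steps is below, with default transport options assumed.

    # Sketch: create the TCP transport once, then add the listener seen in the log.
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420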
00:44:02.964 [2024-06-10 11:52:31.624408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.624415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.624771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.624777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.625124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.625131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.625463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.625469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.625783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.625789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.626124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.626131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.626334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.626341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.626649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.626656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.626961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.626967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.627302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.627309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 
00:44:02.964 [2024-06-10 11:52:31.627639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.627645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.628048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.628055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.628238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.628246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.628560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.628566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.628916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:44:02.964 [2024-06-10 11:52:31.628922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6224000b90 with addr=10.0.0.2, port=4420 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.629114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:02.964 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:02.964 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:02.964 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:02.964 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:02.964 [2024-06-10 11:52:31.639689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.964 [2024-06-10 11:52:31.639765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.964 [2024-06-10 11:52:31.639779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.964 [2024-06-10 11:52:31.639784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.964 [2024-06-10 11:52:31.639789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.964 [2024-06-10 11:52:31.639804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.964 qpair failed and we were unable to recover it. 
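At this point the target reports it is listening on 10.0.0.2 port 4420 and a discovery listener is added, and the failure mode changes: instead of refused TCP connects, the I/O queue-pair CONNECT is rejected by the target with "Unknown controller ID 0x1" and completed with sct 1, sc 130, which is the kind of failure this target-disconnect test is exercising. As a purely hypothetical cross-check from another initiator host one could probe the listener with nvme-cli; the autotest itself drives SPDK's userspace initiator, not these commands.

    # Hypothetical reachability check with nvme-cli (not part of the autotest flow).
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1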
00:44:02.964 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:02.964 11:52:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2520192 00:44:02.964 [2024-06-10 11:52:31.649629] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.964 [2024-06-10 11:52:31.649695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.964 [2024-06-10 11:52:31.649708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.964 [2024-06-10 11:52:31.649713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.964 [2024-06-10 11:52:31.649717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.964 [2024-06-10 11:52:31.649728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.659674] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.964 [2024-06-10 11:52:31.659729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.964 [2024-06-10 11:52:31.659741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.964 [2024-06-10 11:52:31.659746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.964 [2024-06-10 11:52:31.659750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.964 [2024-06-10 11:52:31.659761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.669637] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.964 [2024-06-10 11:52:31.669704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.964 [2024-06-10 11:52:31.669715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.964 [2024-06-10 11:52:31.669720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.964 [2024-06-10 11:52:31.669724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.964 [2024-06-10 11:52:31.669735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.964 qpair failed and we were unable to recover it. 
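The "wait 2520192" call above (host/target_disconnect.sh line 50) blocks until a process started earlier in the test, apparently PID 2520192, exits. The generic bash pattern behind it is sketched below; "reconnect_workload" is a placeholder name, not a script from the repository.

    # Generic pattern behind "wait <pid>": run the workload in the background,
    # remember its PID, perform the target-side disconnect steps, then reap it.
    reconnect_workload &
    workload_pid=$!
    # ... target-side disconnect / reconfiguration happens here ...
    wait "$workload_pid"
    echo "workload exited with status $?"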
00:44:02.964 [2024-06-10 11:52:31.679650] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.964 [2024-06-10 11:52:31.679717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.964 [2024-06-10 11:52:31.679732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.964 [2024-06-10 11:52:31.679737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.964 [2024-06-10 11:52:31.679741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.964 [2024-06-10 11:52:31.679752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.689687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.964 [2024-06-10 11:52:31.689743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.964 [2024-06-10 11:52:31.689754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.964 [2024-06-10 11:52:31.689759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.964 [2024-06-10 11:52:31.689763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.964 [2024-06-10 11:52:31.689774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.964 qpair failed and we were unable to recover it. 00:44:02.964 [2024-06-10 11:52:31.699717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.699771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.699783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.699787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.699791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.699802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 
00:44:02.965 [2024-06-10 11:52:31.709731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.709786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.709798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.709803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.709807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.709817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.719746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.719859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.719871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.719876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.719883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.719894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.729798] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.729856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.729867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.729872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.729876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.729886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 
00:44:02.965 [2024-06-10 11:52:31.739681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.739738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.739749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.739754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.739758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.739768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.749836] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.749891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.749902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.749907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.749911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.749921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.759881] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.759941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.759952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.759957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.759961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.759971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 
00:44:02.965 [2024-06-10 11:52:31.769925] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.769985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.769996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.770001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.770005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.770015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.779980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.780049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.780061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.780065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.780070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.780080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.789955] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.790014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.790025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.790030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.790034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.790044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 
00:44:02.965 [2024-06-10 11:52:31.800122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.800198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.800209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.800214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.800218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.800228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.810163] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.810216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.810228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.810235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.810239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.810250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 00:44:02.965 [2024-06-10 11:52:31.820093] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.820147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.820158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.820163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.820167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.965 [2024-06-10 11:52:31.820177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.965 qpair failed and we were unable to recover it. 
00:44:02.965 [2024-06-10 11:52:31.830169] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.965 [2024-06-10 11:52:31.830228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.965 [2024-06-10 11:52:31.830239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.965 [2024-06-10 11:52:31.830244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.965 [2024-06-10 11:52:31.830248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.830258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:02.966 [2024-06-10 11:52:31.840109] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.840169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.840180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.840185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.840190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.840199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:02.966 [2024-06-10 11:52:31.850149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.850204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.850216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.850220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.850225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.850234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 
00:44:02.966 [2024-06-10 11:52:31.860140] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.860191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.860203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.860207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.860211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.860221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:02.966 [2024-06-10 11:52:31.870189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.870244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.870254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.870259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.870263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.870273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:02.966 [2024-06-10 11:52:31.880242] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.880302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.880313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.880318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.880322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.880332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 
00:44:02.966 [2024-06-10 11:52:31.890255] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.890311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.890322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.890327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.890331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.890341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:02.966 [2024-06-10 11:52:31.900272] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.900349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.900367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.900377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.900381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.900395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:02.966 [2024-06-10 11:52:31.910346] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.910412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.910424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.910429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.910433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.910444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 
00:44:02.966 [2024-06-10 11:52:31.920343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.920409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.920427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.920433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.920438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.920451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:02.966 [2024-06-10 11:52:31.930251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:02.966 [2024-06-10 11:52:31.930313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:02.966 [2024-06-10 11:52:31.930332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:02.966 [2024-06-10 11:52:31.930338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:02.966 [2024-06-10 11:52:31.930342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:02.966 [2024-06-10 11:52:31.930356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:02.966 qpair failed and we were unable to recover it. 00:44:03.239 [2024-06-10 11:52:31.940389] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.239 [2024-06-10 11:52:31.940443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.239 [2024-06-10 11:52:31.940457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.239 [2024-06-10 11:52:31.940463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.239 [2024-06-10 11:52:31.940467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.239 [2024-06-10 11:52:31.940478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.239 qpair failed and we were unable to recover it. 
00:44:03.239 [2024-06-10 11:52:31.950281] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.239 [2024-06-10 11:52:31.950335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.239 [2024-06-10 11:52:31.950347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.239 [2024-06-10 11:52:31.950352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.239 [2024-06-10 11:52:31.950356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.239 [2024-06-10 11:52:31.950367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.239 qpair failed and we were unable to recover it. 00:44:03.239 [2024-06-10 11:52:31.960414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.239 [2024-06-10 11:52:31.960478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.239 [2024-06-10 11:52:31.960490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.239 [2024-06-10 11:52:31.960494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.239 [2024-06-10 11:52:31.960499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.239 [2024-06-10 11:52:31.960509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.239 qpair failed and we were unable to recover it. 00:44:03.239 [2024-06-10 11:52:31.970436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.239 [2024-06-10 11:52:31.970496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.239 [2024-06-10 11:52:31.970508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.239 [2024-06-10 11:52:31.970512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.239 [2024-06-10 11:52:31.970516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.239 [2024-06-10 11:52:31.970527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.239 qpair failed and we were unable to recover it. 
00:44:03.239 [2024-06-10 11:52:31.980448] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.239 [2024-06-10 11:52:31.980502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.239 [2024-06-10 11:52:31.980513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.239 [2024-06-10 11:52:31.980518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:31.980522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:31.980532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:31.990492] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:31.990548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:31.990562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:31.990567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:31.990571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:31.990581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.000536] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.000597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.000609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.000614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.000618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.000627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 
00:44:03.240 [2024-06-10 11:52:32.010533] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.010602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.010613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.010618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.010622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.010632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.020577] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.020630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.020642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.020647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.020651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.020662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.030610] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.030672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.030683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.030688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.030693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.030705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 
00:44:03.240 [2024-06-10 11:52:32.040628] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.040688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.040700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.040705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.040709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.040719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.050668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.050725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.050736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.050741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.050745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.050755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.060689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.060754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.060765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.060770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.060774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.060784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 
00:44:03.240 [2024-06-10 11:52:32.070699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.070782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.070793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.070798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.070802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.070813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.080746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.080817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.080834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.080839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.080843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.080853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.090782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.090837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.090848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.090853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.090857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.090867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 
00:44:03.240 [2024-06-10 11:52:32.100808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.100864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.100876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.100880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.100884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.100894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.110862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.240 [2024-06-10 11:52:32.110923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.240 [2024-06-10 11:52:32.110934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.240 [2024-06-10 11:52:32.110939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.240 [2024-06-10 11:52:32.110943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.240 [2024-06-10 11:52:32.110953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.240 qpair failed and we were unable to recover it. 00:44:03.240 [2024-06-10 11:52:32.120895] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.120954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.120965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.120970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.120977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.120987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 
00:44:03.241 [2024-06-10 11:52:32.130927] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.130980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.130992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.130996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.131000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.131010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 00:44:03.241 [2024-06-10 11:52:32.140924] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.140980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.140991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.140996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.141000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.141010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 00:44:03.241 [2024-06-10 11:52:32.150950] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.151006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.151017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.151022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.151026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.151035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 
00:44:03.241 [2024-06-10 11:52:32.160865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.160927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.160938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.160942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.160946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.160956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 00:44:03.241 [2024-06-10 11:52:32.171016] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.171077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.171088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.171093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.171097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.171107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 00:44:03.241 [2024-06-10 11:52:32.181030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.181084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.181095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.181100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.181104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.181114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 
00:44:03.241 [2024-06-10 11:52:32.191071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.191131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.191142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.191147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.191151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.191161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 00:44:03.241 [2024-06-10 11:52:32.201111] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.241 [2024-06-10 11:52:32.201170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.241 [2024-06-10 11:52:32.201181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.241 [2024-06-10 11:52:32.201186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.241 [2024-06-10 11:52:32.201190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.241 [2024-06-10 11:52:32.201200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.241 qpair failed and we were unable to recover it. 00:44:03.503 [2024-06-10 11:52:32.211138] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.503 [2024-06-10 11:52:32.211193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.503 [2024-06-10 11:52:32.211204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.503 [2024-06-10 11:52:32.211208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.211215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.211225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 
00:44:03.504 [2024-06-10 11:52:32.221134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.221197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.221207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.221212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.221216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.221225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.231189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.231243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.231254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.231259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.231263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.231272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.241202] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.241265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.241276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.241281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.241285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.241294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 
00:44:03.504 [2024-06-10 11:52:32.251257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.251353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.251364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.251369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.251373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.251382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.261269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.261331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.261349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.261354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.261359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.261372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.271335] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.271400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.271413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.271418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.271422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.271433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 
00:44:03.504 [2024-06-10 11:52:32.281329] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.281395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.281406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.281411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.281415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.281426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.291353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.291414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.291425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.291430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.291434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.291444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.301380] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.301457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.301468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.301477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.301481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.301492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 
00:44:03.504 [2024-06-10 11:52:32.311446] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.311511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.311529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.311535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.311540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.311553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.321457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.321519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.321532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.321537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.321541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.321551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 00:44:03.504 [2024-06-10 11:52:32.331482] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.504 [2024-06-10 11:52:32.331540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.504 [2024-06-10 11:52:32.331551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.504 [2024-06-10 11:52:32.331556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.504 [2024-06-10 11:52:32.331560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.504 [2024-06-10 11:52:32.331570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.504 qpair failed and we were unable to recover it. 
00:44:03.505 [2024-06-10 11:52:32.341491] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.341543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.341555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.341559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.341563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.341573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.351522] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.351583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.351595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.351599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.351604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.351613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.361527] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.361587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.361599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.361604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.361608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.361618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 
00:44:03.505 [2024-06-10 11:52:32.371577] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.371653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.371664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.371672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.371677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.371687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.381593] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.381722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.381734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.381739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.381744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.381754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.391625] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.391693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.391707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.391712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.391716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.391726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 
00:44:03.505 [2024-06-10 11:52:32.401640] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.401706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.401717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.401722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.401726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.401736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.411694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.411753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.411764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.411768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.411773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.411782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.421709] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.421766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.421777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.421782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.421786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.421795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 
00:44:03.505 [2024-06-10 11:52:32.431756] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.431838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.431849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.431854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.431858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.431871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.441755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.441820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.441831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.441836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.441840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.441850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.505 [2024-06-10 11:52:32.451804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.451859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.451870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.451875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.451879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.451888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 
00:44:03.505 [2024-06-10 11:52:32.461817] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.505 [2024-06-10 11:52:32.461877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.505 [2024-06-10 11:52:32.461888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.505 [2024-06-10 11:52:32.461893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.505 [2024-06-10 11:52:32.461897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.505 [2024-06-10 11:52:32.461907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.505 qpair failed and we were unable to recover it. 00:44:03.506 [2024-06-10 11:52:32.471851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.506 [2024-06-10 11:52:32.471911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.506 [2024-06-10 11:52:32.471923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.506 [2024-06-10 11:52:32.471928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.506 [2024-06-10 11:52:32.471932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.506 [2024-06-10 11:52:32.471942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.506 qpair failed and we were unable to recover it. 00:44:03.769 [2024-06-10 11:52:32.481889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.769 [2024-06-10 11:52:32.481948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.769 [2024-06-10 11:52:32.481963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.769 [2024-06-10 11:52:32.481969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.481975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.481986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 
00:44:03.770 [2024-06-10 11:52:32.491904] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.491962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.491973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.491978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.491983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.491993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.501941] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.501998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.502010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.502014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.502018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.502028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.511977] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.512034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.512045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.512049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.512053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.512063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 
00:44:03.770 [2024-06-10 11:52:32.521993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.522050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.522061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.522066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.522070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.522082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.532007] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.532057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.532068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.532072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.532076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.532086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.542041] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.542136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.542147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.542152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.542156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.542166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 
00:44:03.770 [2024-06-10 11:52:32.552068] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.552129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.552140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.552145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.552149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.552159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.562104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.562162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.562174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.562178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.562182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.562192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.572156] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.572220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.572231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.572236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.572240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.572249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 
00:44:03.770 [2024-06-10 11:52:32.582136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.582189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.582200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.582205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.582210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.582220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.592198] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.592256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.592268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.592272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.592277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.592286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 00:44:03.770 [2024-06-10 11:52:32.602193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.602256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.602268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.602273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.770 [2024-06-10 11:52:32.602277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.770 [2024-06-10 11:52:32.602289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.770 qpair failed and we were unable to recover it. 
00:44:03.770 [2024-06-10 11:52:32.612225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.770 [2024-06-10 11:52:32.612299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.770 [2024-06-10 11:52:32.612311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.770 [2024-06-10 11:52:32.612315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.612322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.612332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.622265] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.622367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.622379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.622383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.622387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.622397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.632288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.632350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.632368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.632374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.632379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.632392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 
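On the target side, the recurring ctrlr.c:757 "Unknown controller ID 0x1" means the I/O-queue CONNECT names a controller ID (CNTLID) that the subsystem has no live controller for, so the queue is refused with the Invalid Parameters status seen above. The sketch below is a hypothetical, self-contained model of that admission check only; none of the names (live_ctrlr, handle_io_connect) are SPDK's, and it is not the logic of ctrlr.c, just an illustration of why a stale CNTLID produces this message.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_CTRLRS 8

    /* live_ctrlr[id] models "a controller with this CNTLID has a connected
     * admin queue"; an I/O-queue CONNECT is only accepted for such an ID. */
    static bool live_ctrlr[MAX_CTRLRS];

    static void handle_io_connect(uint16_t cntlid)
    {
        if (cntlid >= MAX_CTRLRS || !live_ctrlr[cntlid]) {
            /* Rejection path: reported to the host as sct 0x1 / sc 0x82
             * ("Connect Invalid Parameters", printed as "sct 1, sc 130"). */
            printf("Unknown controller ID 0x%x -> reject CONNECT\n", cntlid);
            return;
        }
        printf("controller 0x%x found -> accept I/O qpair\n", cntlid);
    }

    int main(void)
    {
        handle_io_connect(0x1);      /* no such controller: rejected, as in the log */
        live_ctrlr[0x1] = true;      /* after a successful admin-queue CONNECT ...   */
        handle_io_connect(0x1);      /* ... the same CNTLID is accepted              */
        return 0;
    }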
00:44:03.771 [2024-06-10 11:52:32.642307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.642378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.642390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.642395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.642400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.642411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.652340] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.652395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.652413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.652418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.652423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.652436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.662374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.662429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.662442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.662447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.662451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.662462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 
00:44:03.771 [2024-06-10 11:52:32.672400] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.672460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.672472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.672477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.672481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.672492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.682405] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.682480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.682492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.682496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.682501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.682511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.692424] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.692507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.692519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.692524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.692528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.692538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 
00:44:03.771 [2024-06-10 11:52:32.702499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.702550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.702561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.702568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.702573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.702583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.712523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.712592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.712603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.712608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.712612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.712622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 00:44:03.771 [2024-06-10 11:52:32.722414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.722474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.771 [2024-06-10 11:52:32.722486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.771 [2024-06-10 11:52:32.722491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.771 [2024-06-10 11:52:32.722495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.771 [2024-06-10 11:52:32.722505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.771 qpair failed and we were unable to recover it. 
00:44:03.771 [2024-06-10 11:52:32.732440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:03.771 [2024-06-10 11:52:32.732496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:03.772 [2024-06-10 11:52:32.732508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:03.772 [2024-06-10 11:52:32.732512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:03.772 [2024-06-10 11:52:32.732517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:03.772 [2024-06-10 11:52:32.732526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:03.772 qpair failed and we were unable to recover it. 00:44:04.035 [2024-06-10 11:52:32.742536] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.742593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.742604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.742609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.742613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.742623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 00:44:04.035 [2024-06-10 11:52:32.752619] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.752681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.752694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.752699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.752703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.752713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 
00:44:04.035 [2024-06-10 11:52:32.762641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.762710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.762722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.762727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.762731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.762741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 00:44:04.035 [2024-06-10 11:52:32.772661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.772722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.772734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.772738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.772742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.772753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 00:44:04.035 [2024-06-10 11:52:32.782683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.782743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.782754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.782759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.782763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.782774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 
00:44:04.035 [2024-06-10 11:52:32.792709] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.792766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.792781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.792786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.792790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.792800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 00:44:04.035 [2024-06-10 11:52:32.802756] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.802819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.802830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.802835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.802839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.802849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 00:44:04.035 [2024-06-10 11:52:32.812783] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.812840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.812851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.812856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.812860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.812870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 
00:44:04.035 [2024-06-10 11:52:32.822827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.822879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.822890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.822895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.822899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.035 [2024-06-10 11:52:32.822909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.035 qpair failed and we were unable to recover it. 00:44:04.035 [2024-06-10 11:52:32.832830] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.035 [2024-06-10 11:52:32.832888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.035 [2024-06-10 11:52:32.832899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.035 [2024-06-10 11:52:32.832904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.035 [2024-06-10 11:52:32.832908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.832921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.842872] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.842931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.842942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.842947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.842951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.842961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 
00:44:04.036 [2024-06-10 11:52:32.852801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.852857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.852870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.852874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.852878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.852889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.862892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.862951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.862962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.862967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.862971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.862981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.872966] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.873022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.873033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.873038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.873042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.873052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 
00:44:04.036 [2024-06-10 11:52:32.882979] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.883043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.883057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.883062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.883066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.883076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.892998] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.893087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.893098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.893103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.893107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.893117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.903041] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.903092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.903103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.903108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.903112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.903122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 
00:44:04.036 [2024-06-10 11:52:32.913055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.913112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.913123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.913128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.913132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.913142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.923093] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.923154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.923165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.923169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.923174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.923186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.932988] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.933045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.933056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.933060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.933064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.933074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 
00:44:04.036 [2024-06-10 11:52:32.943104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.943160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.943170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.943175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.943179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.943189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.953158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.953218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.953229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.953234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.953238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.953247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 00:44:04.036 [2024-06-10 11:52:32.963236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.963299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.036 [2024-06-10 11:52:32.963310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.036 [2024-06-10 11:52:32.963315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.036 [2024-06-10 11:52:32.963319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.036 [2024-06-10 11:52:32.963329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.036 qpair failed and we were unable to recover it. 
00:44:04.036 [2024-06-10 11:52:32.973204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.036 [2024-06-10 11:52:32.973257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.037 [2024-06-10 11:52:32.973271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.037 [2024-06-10 11:52:32.973276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.037 [2024-06-10 11:52:32.973280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.037 [2024-06-10 11:52:32.973290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.037 qpair failed and we were unable to recover it. 00:44:04.037 [2024-06-10 11:52:32.983230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.037 [2024-06-10 11:52:32.983283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.037 [2024-06-10 11:52:32.983295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.037 [2024-06-10 11:52:32.983300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.037 [2024-06-10 11:52:32.983304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.037 [2024-06-10 11:52:32.983315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.037 qpair failed and we were unable to recover it. 00:44:04.037 [2024-06-10 11:52:32.993277] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.037 [2024-06-10 11:52:32.993332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.037 [2024-06-10 11:52:32.993343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.037 [2024-06-10 11:52:32.993348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.037 [2024-06-10 11:52:32.993352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.037 [2024-06-10 11:52:32.993362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.037 qpair failed and we were unable to recover it. 
00:44:04.037 [2024-06-10 11:52:33.003309] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.037 [2024-06-10 11:52:33.003375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.037 [2024-06-10 11:52:33.003386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.037 [2024-06-10 11:52:33.003390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.037 [2024-06-10 11:52:33.003395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.037 [2024-06-10 11:52:33.003405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.037 qpair failed and we were unable to recover it. 00:44:04.299 [2024-06-10 11:52:33.013370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.299 [2024-06-10 11:52:33.013425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.299 [2024-06-10 11:52:33.013436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.299 [2024-06-10 11:52:33.013441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.299 [2024-06-10 11:52:33.013448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.299 [2024-06-10 11:52:33.013458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.299 qpair failed and we were unable to recover it. 00:44:04.299 [2024-06-10 11:52:33.023348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.299 [2024-06-10 11:52:33.023427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.299 [2024-06-10 11:52:33.023438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.299 [2024-06-10 11:52:33.023443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.299 [2024-06-10 11:52:33.023447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.299 [2024-06-10 11:52:33.023457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.299 qpair failed and we were unable to recover it. 
00:44:04.299 [2024-06-10 11:52:33.033436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.299 [2024-06-10 11:52:33.033490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.299 [2024-06-10 11:52:33.033501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.299 [2024-06-10 11:52:33.033506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.299 [2024-06-10 11:52:33.033511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.299 [2024-06-10 11:52:33.033520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.299 qpair failed and we were unable to recover it. 00:44:04.299 [2024-06-10 11:52:33.043412] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.043473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.043484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.043489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.043493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.043503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.053447] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.053529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.053541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.053545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.053550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.053559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 
00:44:04.300 [2024-06-10 11:52:33.063473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.063537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.063548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.063552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.063557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.063567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.073519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.073585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.073596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.073601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.073606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.073616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.083541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.083604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.083616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.083621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.083625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.083635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 
00:44:04.300 [2024-06-10 11:52:33.093576] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.093625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.093637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.093642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.093646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.093655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.103618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.103705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.103716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.103725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.103729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.103739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.113634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.113693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.113705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.113709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.113714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.113724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 
00:44:04.300 [2024-06-10 11:52:33.123651] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.123715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.123726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.123731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.123735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.123745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.133673] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.133822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.133834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.133838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.133843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.133853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.143718] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.143774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.143785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.143790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.143794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.143804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 
00:44:04.300 [2024-06-10 11:52:33.153739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.153795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.153806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.153811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.153815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.153825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.163786] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.163852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.163863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.300 [2024-06-10 11:52:33.163867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.300 [2024-06-10 11:52:33.163871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.300 [2024-06-10 11:52:33.163881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.300 qpair failed and we were unable to recover it. 00:44:04.300 [2024-06-10 11:52:33.173768] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.300 [2024-06-10 11:52:33.173828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.300 [2024-06-10 11:52:33.173839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.173844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.173848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.173858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 
00:44:04.301 [2024-06-10 11:52:33.183807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.183863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.183874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.183879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.183883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.183893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 00:44:04.301 [2024-06-10 11:52:33.193847] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.193924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.193935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.193943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.193947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.193956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 00:44:04.301 [2024-06-10 11:52:33.203920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.203980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.203992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.203996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.204000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.204010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 
00:44:04.301 [2024-06-10 11:52:33.213798] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.213854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.213865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.213869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.213874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.213884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 00:44:04.301 [2024-06-10 11:52:33.223905] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.223986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.223998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.224002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.224007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.224016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 00:44:04.301 [2024-06-10 11:52:33.233927] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.233987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.233998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.234002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.234006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.234016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 
00:44:04.301 [2024-06-10 11:52:33.243987] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.244045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.244056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.244061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.244065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.244074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 00:44:04.301 [2024-06-10 11:52:33.254028] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.254080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.254091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.254096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.254100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.254110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 00:44:04.301 [2024-06-10 11:52:33.264032] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.301 [2024-06-10 11:52:33.264086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.301 [2024-06-10 11:52:33.264097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.301 [2024-06-10 11:52:33.264102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.301 [2024-06-10 11:52:33.264106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.301 [2024-06-10 11:52:33.264116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.301 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.274043] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.274097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.274108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.274113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.274117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.274127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.284150] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.284220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.284234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.284239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.284243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.284254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.294111] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.294166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.294178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.294183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.294187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.294198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.304141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.304195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.304207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.304212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.304216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.304225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.314163] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.314258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.314269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.314273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.314278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.314287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.324202] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.324259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.324270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.324275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.324279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.324292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.334233] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.334289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.334301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.334305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.334309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.334319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.344260] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.344317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.344329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.344334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.344339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.344349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.354291] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.354345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.354356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.354361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.354365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.354375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.364333] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.364393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.364405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.364410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.364414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.364423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.374341] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.374398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.374412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.374417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.374421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.374431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.384388] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.384446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.384457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.384462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.384466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.384476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.394414] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.394474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.394485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.394490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.394494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.394504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.404433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.404535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.404554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.404560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.404565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.404579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.414464] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.414519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.414532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.414536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.414544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.414555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.424502] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.424553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.424565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.424570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.424574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.424584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.434523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.434578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.434589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.434594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.434598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.434608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.444570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.444679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.444691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.444696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.444700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.444710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.454580] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.454664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.454678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.454683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.454688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.454698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.464607] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.464661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.464675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.464680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.464684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.464694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 00:44:04.564 [2024-06-10 11:52:33.474703] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.474804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.474814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.474819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.474823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.564 [2024-06-10 11:52:33.474833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.564 qpair failed and we were unable to recover it. 
00:44:04.564 [2024-06-10 11:52:33.484687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.564 [2024-06-10 11:52:33.484749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.564 [2024-06-10 11:52:33.484762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.564 [2024-06-10 11:52:33.484768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.564 [2024-06-10 11:52:33.484773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.565 [2024-06-10 11:52:33.484784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.565 qpair failed and we were unable to recover it. 00:44:04.565 [2024-06-10 11:52:33.494706] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.565 [2024-06-10 11:52:33.494759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.565 [2024-06-10 11:52:33.494771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.565 [2024-06-10 11:52:33.494777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.565 [2024-06-10 11:52:33.494782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.565 [2024-06-10 11:52:33.494793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.565 qpair failed and we were unable to recover it. 00:44:04.565 [2024-06-10 11:52:33.504715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.565 [2024-06-10 11:52:33.504786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.565 [2024-06-10 11:52:33.504797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.565 [2024-06-10 11:52:33.504808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.565 [2024-06-10 11:52:33.504812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.565 [2024-06-10 11:52:33.504822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.565 qpair failed and we were unable to recover it. 
00:44:04.565 [2024-06-10 11:52:33.514751] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.565 [2024-06-10 11:52:33.514808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.565 [2024-06-10 11:52:33.514819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.565 [2024-06-10 11:52:33.514823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.565 [2024-06-10 11:52:33.514828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.565 [2024-06-10 11:52:33.514838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.565 qpair failed and we were unable to recover it. 00:44:04.565 [2024-06-10 11:52:33.524801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.565 [2024-06-10 11:52:33.524861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.565 [2024-06-10 11:52:33.524872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.565 [2024-06-10 11:52:33.524877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.565 [2024-06-10 11:52:33.524881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.565 [2024-06-10 11:52:33.524891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.565 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.534775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.534833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.534844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.534848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.534853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.534862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 
00:44:04.827 [2024-06-10 11:52:33.544813] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.544875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.544886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.544891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.544895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.544905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.554908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.554968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.554979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.554984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.554988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.554998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.564858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.564924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.564935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.564939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.564944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.564954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 
00:44:04.827 [2024-06-10 11:52:33.574936] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.574993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.575004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.575009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.575013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.575023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.584939] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.584995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.585006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.585011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.585015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.585025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.594861] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.594922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.594933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.594941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.594945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.594955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 
00:44:04.827 [2024-06-10 11:52:33.605005] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.605069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.605080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.605084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.605089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.605098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.615011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.615063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.615074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.615079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.615083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.615093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.625067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.625121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.625132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.625137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.625141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.625150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 
00:44:04.827 [2024-06-10 11:52:33.635082] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.635138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.635149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.635153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.635157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.635167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.645160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.645223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.645234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.645239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.645243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.645252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.655143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.655196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.655207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.655211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.655216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.655225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 
00:44:04.827 [2024-06-10 11:52:33.665171] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.665227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.665238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.665243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.665247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.665257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.675121] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.675182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.675196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.675202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.675206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.675216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.685225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.685312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.685327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.685332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.685336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.685347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 
00:44:04.827 [2024-06-10 11:52:33.695244] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.827 [2024-06-10 11:52:33.695297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.827 [2024-06-10 11:52:33.695308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.827 [2024-06-10 11:52:33.695313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.827 [2024-06-10 11:52:33.695317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.827 [2024-06-10 11:52:33.695327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.827 qpair failed and we were unable to recover it. 00:44:04.827 [2024-06-10 11:52:33.705287] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.705369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.705380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.705385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.705389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.705399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 00:44:04.828 [2024-06-10 11:52:33.715311] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.715367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.715379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.715383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.715387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.715397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 
00:44:04.828 [2024-06-10 11:52:33.725332] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.725391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.725402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.725406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.725411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.725423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 00:44:04.828 [2024-06-10 11:52:33.735379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.735434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.735445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.735450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.735454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.735463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 00:44:04.828 [2024-06-10 11:52:33.745392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.745449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.745460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.745465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.745469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.745479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 
00:44:04.828 [2024-06-10 11:52:33.755409] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.755465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.755476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.755480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.755485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.755494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 00:44:04.828 [2024-06-10 11:52:33.765443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.765535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.765546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.765551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.765555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.765565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 00:44:04.828 [2024-06-10 11:52:33.775453] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.775507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.775521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.775526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.775530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.775540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 
00:44:04.828 [2024-06-10 11:52:33.785477] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.785531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.785543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.785547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.785551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.785561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 00:44:04.828 [2024-06-10 11:52:33.795597] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:04.828 [2024-06-10 11:52:33.795696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:04.828 [2024-06-10 11:52:33.795708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:04.828 [2024-06-10 11:52:33.795713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:04.828 [2024-06-10 11:52:33.795717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:04.828 [2024-06-10 11:52:33.795727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:04.828 qpair failed and we were unable to recover it. 00:44:05.089 [2024-06-10 11:52:33.805661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.089 [2024-06-10 11:52:33.805742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.089 [2024-06-10 11:52:33.805753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.089 [2024-06-10 11:52:33.805758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.089 [2024-06-10 11:52:33.805762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.089 [2024-06-10 11:52:33.805772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.089 qpair failed and we were unable to recover it. 
00:44:05.089 [2024-06-10 11:52:33.815498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.089 [2024-06-10 11:52:33.815554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.089 [2024-06-10 11:52:33.815565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.089 [2024-06-10 11:52:33.815570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.089 [2024-06-10 11:52:33.815577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.089 [2024-06-10 11:52:33.815587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.089 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.825647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.825705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.825716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.825721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.825725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.825736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.835685] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.835743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.835754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.835759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.835763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.835773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 
00:44:05.090 [2024-06-10 11:52:33.845666] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.845724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.845735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.845739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.845744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.845754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.855663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.855720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.855731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.855736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.855740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.855750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.865730] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.865789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.865799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.865804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.865808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.865818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 
00:44:05.090 [2024-06-10 11:52:33.875758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.875816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.875827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.875832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.875836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.875846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.885778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.885840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.885851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.885856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.885860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.885870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.895766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.895819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.895830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.895834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.895838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.895848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 
00:44:05.090 [2024-06-10 11:52:33.905807] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.905891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.905903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.905908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.905915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.905928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.915876] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.915930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.915942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.915947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.915951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.915961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.925915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.925975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.925986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.925991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.925995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.926005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 
00:44:05.090 [2024-06-10 11:52:33.935891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.935944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.935955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.935959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.935964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.935974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.945963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.946017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.090 [2024-06-10 11:52:33.946028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.090 [2024-06-10 11:52:33.946034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.090 [2024-06-10 11:52:33.946038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.090 [2024-06-10 11:52:33.946048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.090 qpair failed and we were unable to recover it. 00:44:05.090 [2024-06-10 11:52:33.956000] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.090 [2024-06-10 11:52:33.956057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:33.956069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:33.956073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:33.956077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:33.956088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 
00:44:05.091 [2024-06-10 11:52:33.966011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:33.966079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:33.966090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:33.966095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:33.966099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:33.966109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 00:44:05.091 [2024-06-10 11:52:33.975922] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:33.975987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:33.975998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:33.976003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:33.976008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:33.976018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 00:44:05.091 [2024-06-10 11:52:33.985950] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:33.986004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:33.986015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:33.986020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:33.986024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:33.986034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 
00:44:05.091 [2024-06-10 11:52:33.996089] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:33.996144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:33.996155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:33.996162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:33.996167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:33.996177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 00:44:05.091 [2024-06-10 11:52:34.006127] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:34.006187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:34.006198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:34.006203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:34.006207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:34.006217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 00:44:05.091 [2024-06-10 11:52:34.016175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:34.016228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:34.016239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:34.016243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:34.016247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:34.016257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 
00:44:05.091 [2024-06-10 11:52:34.026063] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:34.026125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:34.026136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:34.026141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:34.026145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:34.026155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 00:44:05.091 [2024-06-10 11:52:34.036315] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:34.036374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:34.036385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:34.036390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:34.036394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:34.036404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 00:44:05.091 [2024-06-10 11:52:34.046221] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:34.046278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:34.046290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:34.046294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:34.046299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:34.046308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 
00:44:05.091 [2024-06-10 11:52:34.056257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.091 [2024-06-10 11:52:34.056353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.091 [2024-06-10 11:52:34.056364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.091 [2024-06-10 11:52:34.056369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.091 [2024-06-10 11:52:34.056373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.091 [2024-06-10 11:52:34.056382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.091 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.066323] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.066415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.066427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.066431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.066436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.066446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.076257] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.076314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.076325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.076330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.076334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.076344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 
00:44:05.354 [2024-06-10 11:52:34.086360] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.086469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.086483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.086488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.086492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.086502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.096367] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.096426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.096437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.096442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.096446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.096456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.106407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.106461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.106472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.106477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.106481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.106491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 
00:44:05.354 [2024-06-10 11:52:34.116485] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.116541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.116552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.116557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.116561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.116571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.126366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.126422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.126434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.126438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.126443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.126455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.136533] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.136589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.136601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.136605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.136609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.136619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 
00:44:05.354 [2024-06-10 11:52:34.146524] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.146581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.146592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.146597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.146601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.146611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.156557] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.156613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.156624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.156629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.156633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.156643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 00:44:05.354 [2024-06-10 11:52:34.166653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.354 [2024-06-10 11:52:34.166763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.354 [2024-06-10 11:52:34.166774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.354 [2024-06-10 11:52:34.166779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.354 [2024-06-10 11:52:34.166783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.354 [2024-06-10 11:52:34.166793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.354 qpair failed and we were unable to recover it. 
00:44:05.354 [2024-06-10 11:52:34.176606] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.176660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.176678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.176683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.176687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.176698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.186525] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.186586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.186597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.186602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.186606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.186616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.196684] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.196742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.196753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.196758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.196762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.196772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 
00:44:05.355 [2024-06-10 11:52:34.206659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.206721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.206732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.206737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.206741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.206751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.216723] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.216819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.216830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.216834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.216839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.216854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.226754] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.226809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.226819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.226824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.226828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.226838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 
00:44:05.355 [2024-06-10 11:52:34.236778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.236869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.236880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.236885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.236889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.236899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.246845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.246925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.246936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.246941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.246945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.246955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.256844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.256900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.256910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.256915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.256919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.256929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 
00:44:05.355 [2024-06-10 11:52:34.266862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.266922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.266932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.266937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.266941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.266951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.276908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.276965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.276975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.276980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.276984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.276994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 00:44:05.355 [2024-06-10 11:52:34.286906] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.286970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.355 [2024-06-10 11:52:34.286980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.355 [2024-06-10 11:52:34.286985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.355 [2024-06-10 11:52:34.286989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.355 [2024-06-10 11:52:34.286999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.355 qpair failed and we were unable to recover it. 
00:44:05.355 [2024-06-10 11:52:34.296941] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.355 [2024-06-10 11:52:34.296995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.356 [2024-06-10 11:52:34.297006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.356 [2024-06-10 11:52:34.297010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.356 [2024-06-10 11:52:34.297014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.356 [2024-06-10 11:52:34.297024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.356 qpair failed and we were unable to recover it. 00:44:05.356 [2024-06-10 11:52:34.306932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.356 [2024-06-10 11:52:34.306990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.356 [2024-06-10 11:52:34.307000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.356 [2024-06-10 11:52:34.307005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.356 [2024-06-10 11:52:34.307012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.356 [2024-06-10 11:52:34.307022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.356 qpair failed and we were unable to recover it. 00:44:05.356 [2024-06-10 11:52:34.317022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.356 [2024-06-10 11:52:34.317078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.356 [2024-06-10 11:52:34.317089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.356 [2024-06-10 11:52:34.317093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.356 [2024-06-10 11:52:34.317098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.356 [2024-06-10 11:52:34.317108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.356 qpair failed and we were unable to recover it. 
00:44:05.619 [2024-06-10 11:52:34.327042] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.327211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.327222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.327227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.327231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.327241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 00:44:05.619 [2024-06-10 11:52:34.337063] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.337147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.337158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.337163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.337167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.337176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 00:44:05.619 [2024-06-10 11:52:34.347068] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.347123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.347134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.347139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.347143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.347153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 
00:44:05.619 [2024-06-10 11:52:34.357120] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.357197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.357208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.357213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.357217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.357227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 00:44:05.619 [2024-06-10 11:52:34.367175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.367250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.367261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.367266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.367270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.367280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 00:44:05.619 [2024-06-10 11:52:34.377076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.377128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.377139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.377144] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.377148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.377158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 
00:44:05.619 [2024-06-10 11:52:34.387181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.387237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.387248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.387252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.387256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.387266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 00:44:05.619 [2024-06-10 11:52:34.397236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.397305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.397318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.397328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.397333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.397343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 00:44:05.619 [2024-06-10 11:52:34.407259] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.407320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.407333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.407338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.407342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.619 [2024-06-10 11:52:34.407352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.619 qpair failed and we were unable to recover it. 
00:44:05.619 [2024-06-10 11:52:34.417297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.619 [2024-06-10 11:52:34.417385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.619 [2024-06-10 11:52:34.417404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.619 [2024-06-10 11:52:34.417410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.619 [2024-06-10 11:52:34.417415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.417428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.427320] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.427415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.427435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.427441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.427446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.427459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.437349] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.437408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.437427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.437432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.437437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.437450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 
00:44:05.620 [2024-06-10 11:52:34.447369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.447436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.447448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.447453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.447458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.447469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.457403] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.457467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.457479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.457484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.457489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.457499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.467449] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.467519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.467531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.467536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.467540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.467550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 
00:44:05.620 [2024-06-10 11:52:34.477365] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.477423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.477434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.477439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.477444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.477453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.487560] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.487663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.487682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.487687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.487691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.487703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.497499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.497551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.497562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.497567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.497571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.497581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 
00:44:05.620 [2024-06-10 11:52:34.507547] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.507599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.507611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.507615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.507620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.507630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.517618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.517724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.517735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.517740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.517744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.517754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 00:44:05.620 [2024-06-10 11:52:34.527484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.527544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.527555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.527560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.527564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.620 [2024-06-10 11:52:34.527577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.620 qpair failed and we were unable to recover it. 
00:44:05.620 [2024-06-10 11:52:34.537634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.620 [2024-06-10 11:52:34.537692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.620 [2024-06-10 11:52:34.537703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.620 [2024-06-10 11:52:34.537708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.620 [2024-06-10 11:52:34.537712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.621 [2024-06-10 11:52:34.537722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.621 qpair failed and we were unable to recover it. 00:44:05.621 [2024-06-10 11:52:34.547661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.621 [2024-06-10 11:52:34.547719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.621 [2024-06-10 11:52:34.547730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.621 [2024-06-10 11:52:34.547735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.621 [2024-06-10 11:52:34.547740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.621 [2024-06-10 11:52:34.547750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.621 qpair failed and we were unable to recover it. 00:44:05.621 [2024-06-10 11:52:34.557650] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.621 [2024-06-10 11:52:34.557709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.621 [2024-06-10 11:52:34.557720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.621 [2024-06-10 11:52:34.557725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.621 [2024-06-10 11:52:34.557729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.621 [2024-06-10 11:52:34.557739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.621 qpair failed and we were unable to recover it. 
00:44:05.621 [2024-06-10 11:52:34.567710] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.621 [2024-06-10 11:52:34.567772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.621 [2024-06-10 11:52:34.567783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.621 [2024-06-10 11:52:34.567788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.621 [2024-06-10 11:52:34.567792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.621 [2024-06-10 11:52:34.567802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.621 qpair failed and we were unable to recover it. 00:44:05.621 [2024-06-10 11:52:34.577607] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.621 [2024-06-10 11:52:34.577668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.621 [2024-06-10 11:52:34.577685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.621 [2024-06-10 11:52:34.577690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.621 [2024-06-10 11:52:34.577694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.621 [2024-06-10 11:52:34.577705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.621 qpair failed and we were unable to recover it. 00:44:05.621 [2024-06-10 11:52:34.587721] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.621 [2024-06-10 11:52:34.587779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.621 [2024-06-10 11:52:34.587791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.621 [2024-06-10 11:52:34.587796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.621 [2024-06-10 11:52:34.587800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.621 [2024-06-10 11:52:34.587810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.621 qpair failed and we were unable to recover it. 
00:44:05.883 [2024-06-10 11:52:34.597785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.883 [2024-06-10 11:52:34.597844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.883 [2024-06-10 11:52:34.597855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.883 [2024-06-10 11:52:34.597860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.883 [2024-06-10 11:52:34.597864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.883 [2024-06-10 11:52:34.597874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.883 qpair failed and we were unable to recover it. 00:44:05.883 [2024-06-10 11:52:34.607822] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.883 [2024-06-10 11:52:34.607883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.883 [2024-06-10 11:52:34.607894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.883 [2024-06-10 11:52:34.607899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.883 [2024-06-10 11:52:34.607904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.883 [2024-06-10 11:52:34.607913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.883 qpair failed and we were unable to recover it. 00:44:05.883 [2024-06-10 11:52:34.617905] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.883 [2024-06-10 11:52:34.617962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.883 [2024-06-10 11:52:34.617973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.883 [2024-06-10 11:52:34.617978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.883 [2024-06-10 11:52:34.617982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.883 [2024-06-10 11:52:34.617995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.883 qpair failed and we were unable to recover it. 
00:44:05.883 [2024-06-10 11:52:34.627881] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.883 [2024-06-10 11:52:34.627935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.883 [2024-06-10 11:52:34.627946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.883 [2024-06-10 11:52:34.627951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.883 [2024-06-10 11:52:34.627955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.883 [2024-06-10 11:52:34.627965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.883 qpair failed and we were unable to recover it. 00:44:05.883 [2024-06-10 11:52:34.637876] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.883 [2024-06-10 11:52:34.637933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.883 [2024-06-10 11:52:34.637944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.883 [2024-06-10 11:52:34.637949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.883 [2024-06-10 11:52:34.637953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.883 [2024-06-10 11:52:34.637963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.883 qpair failed and we were unable to recover it. 00:44:05.883 [2024-06-10 11:52:34.647879] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.647944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.647956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.647961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.647965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.647975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 
00:44:05.884 [2024-06-10 11:52:34.657950] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.658009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.658021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.658026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.658030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.658042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.668065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.668121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.668135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.668140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.668144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.668155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.678017] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.678073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.678084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.678089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.678093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.678103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 
00:44:05.884 [2024-06-10 11:52:34.688038] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.688095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.688107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.688112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.688116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.688126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.698117] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.698224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.698235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.698240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.698244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.698255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.708166] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.708222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.708234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.708238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.708245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.708255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 
00:44:05.884 [2024-06-10 11:52:34.718117] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.718217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.718228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.718233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.718237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.718247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.728149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.728203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.728214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.728219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.728223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.728233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.738186] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.738240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.738251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.738255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.738259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.738270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 
00:44:05.884 [2024-06-10 11:52:34.748180] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.748240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.748252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.748257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.748261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.748270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.758262] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.758322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.758333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.758338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.758342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.758352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 00:44:05.884 [2024-06-10 11:52:34.768260] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.768338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.768356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.884 [2024-06-10 11:52:34.768362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.884 [2024-06-10 11:52:34.768367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.884 [2024-06-10 11:52:34.768380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.884 qpair failed and we were unable to recover it. 
00:44:05.884 [2024-06-10 11:52:34.778187] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.884 [2024-06-10 11:52:34.778244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.884 [2024-06-10 11:52:34.778256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.778261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.778265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.778276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 00:44:05.885 [2024-06-10 11:52:34.788284] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.885 [2024-06-10 11:52:34.788378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.885 [2024-06-10 11:52:34.788390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.788395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.788399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.788409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 00:44:05.885 [2024-06-10 11:52:34.798332] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.885 [2024-06-10 11:52:34.798391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.885 [2024-06-10 11:52:34.798409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.798419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.798424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.798437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 
00:44:05.885 [2024-06-10 11:52:34.808361] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.885 [2024-06-10 11:52:34.808425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.885 [2024-06-10 11:52:34.808443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.808449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.808454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.808467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 00:44:05.885 [2024-06-10 11:52:34.818406] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.885 [2024-06-10 11:52:34.818467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.885 [2024-06-10 11:52:34.818488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.818494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.818498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.818512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 00:44:05.885 [2024-06-10 11:52:34.828421] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.885 [2024-06-10 11:52:34.828482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.885 [2024-06-10 11:52:34.828500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.828506] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.828511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.828524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 
00:44:05.885 [2024-06-10 11:52:34.838473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.885 [2024-06-10 11:52:34.838575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.885 [2024-06-10 11:52:34.838587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.838592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.838597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.838608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 00:44:05.885 [2024-06-10 11:52:34.848487] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:05.885 [2024-06-10 11:52:34.848573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:05.885 [2024-06-10 11:52:34.848586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:05.885 [2024-06-10 11:52:34.848591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:05.885 [2024-06-10 11:52:34.848595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:05.885 [2024-06-10 11:52:34.848605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:05.885 qpair failed and we were unable to recover it. 00:44:06.147 [2024-06-10 11:52:34.858509] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.147 [2024-06-10 11:52:34.858599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.858611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.858616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.858620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.858631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 
00:44:06.148 [2024-06-10 11:52:34.868530] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.868582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.868594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.868599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.868603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.868613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.878565] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.878622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.878633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.878638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.878642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.878653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.888589] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.888689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.888700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.888708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.888713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.888723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 
00:44:06.148 [2024-06-10 11:52:34.898621] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.898684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.898695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.898700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.898704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.898714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.908645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.908750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.908761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.908766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.908770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.908780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.918666] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.918727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.918739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.918744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.918748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.918761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 
00:44:06.148 [2024-06-10 11:52:34.928717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.928807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.928818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.928823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.928827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.928838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.938733] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.938789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.938800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.938805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.938809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.938819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.948635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.948697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.948708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.948713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.948717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.948727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 
00:44:06.148 [2024-06-10 11:52:34.958773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.958828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.958839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.958844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.958848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.958858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.968805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.968870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.968881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.968886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.968890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.968899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 00:44:06.148 [2024-06-10 11:52:34.978809] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.978862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.978878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.978883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.148 [2024-06-10 11:52:34.978887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.148 [2024-06-10 11:52:34.978897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.148 qpair failed and we were unable to recover it. 
00:44:06.148 [2024-06-10 11:52:34.988856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.148 [2024-06-10 11:52:34.988934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.148 [2024-06-10 11:52:34.988945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.148 [2024-06-10 11:52:34.988950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:34.988954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:34.988964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:34.998913] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:34.998968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:34.998979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:34.998984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:34.998988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:34.998999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:35.008935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.008998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.009009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.009014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.009018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.009028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 
00:44:06.149 [2024-06-10 11:52:35.018963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.019041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.019052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.019057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.019062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.019074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:35.029013] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.029089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.029100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.029104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.029108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.029118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:35.038991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.039050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.039062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.039066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.039070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.039080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 
00:44:06.149 [2024-06-10 11:52:35.049032] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.049135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.049146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.049150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.049154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.049164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:35.059072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.059158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.059169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.059174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.059178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.059188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:35.069087] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.069145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.069160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.069165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.069169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.069182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 
00:44:06.149 [2024-06-10 11:52:35.079115] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.079170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.079181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.079186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.079190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.079201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:35.089022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.089084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.089095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.089099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.089104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.089113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.149 [2024-06-10 11:52:35.099128] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.099227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.099238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.099243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.099247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.099258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 
00:44:06.149 [2024-06-10 11:52:35.109174] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.149 [2024-06-10 11:52:35.109226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.149 [2024-06-10 11:52:35.109237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.149 [2024-06-10 11:52:35.109242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.149 [2024-06-10 11:52:35.109249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.149 [2024-06-10 11:52:35.109259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.149 qpair failed and we were unable to recover it. 00:44:06.411 [2024-06-10 11:52:35.119217] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.411 [2024-06-10 11:52:35.119274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.411 [2024-06-10 11:52:35.119285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.411 [2024-06-10 11:52:35.119290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.411 [2024-06-10 11:52:35.119294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.411 [2024-06-10 11:52:35.119304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.411 qpair failed and we were unable to recover it. 00:44:06.411 [2024-06-10 11:52:35.129247] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.411 [2024-06-10 11:52:35.129307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.411 [2024-06-10 11:52:35.129319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.411 [2024-06-10 11:52:35.129324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.411 [2024-06-10 11:52:35.129328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.411 [2024-06-10 11:52:35.129337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.411 qpair failed and we were unable to recover it. 
00:44:06.411 [2024-06-10 11:52:35.139225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.411 [2024-06-10 11:52:35.139288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.411 [2024-06-10 11:52:35.139300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.411 [2024-06-10 11:52:35.139305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.411 [2024-06-10 11:52:35.139309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.411 [2024-06-10 11:52:35.139319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.411 qpair failed and we were unable to recover it. 00:44:06.411 [2024-06-10 11:52:35.149307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.411 [2024-06-10 11:52:35.149356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.411 [2024-06-10 11:52:35.149367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.411 [2024-06-10 11:52:35.149372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.411 [2024-06-10 11:52:35.149376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.411 [2024-06-10 11:52:35.149386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.411 qpair failed and we were unable to recover it. 00:44:06.411 [2024-06-10 11:52:35.159199] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.411 [2024-06-10 11:52:35.159258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.411 [2024-06-10 11:52:35.159270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.411 [2024-06-10 11:52:35.159275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.411 [2024-06-10 11:52:35.159279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.411 [2024-06-10 11:52:35.159289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.411 qpair failed and we were unable to recover it. 
00:44:06.411 [2024-06-10 11:52:35.169366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.411 [2024-06-10 11:52:35.169457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.411 [2024-06-10 11:52:35.169468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.411 [2024-06-10 11:52:35.169473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.169477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.169487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.179369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.179422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.179433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.179438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.179442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.179452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.189383] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.189434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.189445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.189449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.189454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.189464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 
00:44:06.412 [2024-06-10 11:52:35.199427] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.199483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.199494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.199502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.199506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.199516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.209499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.209562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.209573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.209578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.209582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.209592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.219492] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.219544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.219555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.219560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.219564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.219574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 
00:44:06.412 [2024-06-10 11:52:35.229507] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.229601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.229612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.229616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.229621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.229631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.239546] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.239605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.239616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.239621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.239625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.239635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.249567] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.249626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.249637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.249642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.249646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.249656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 
00:44:06.412 [2024-06-10 11:52:35.259602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.259654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.259665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.259673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.259677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.259687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.269646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.269728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.269740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.269745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.269749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.269759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.279660] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.279720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.279732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.279737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.279741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.279751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 
00:44:06.412 [2024-06-10 11:52:35.289739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.289845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.289856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.289864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.289869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.289879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.412 qpair failed and we were unable to recover it. 00:44:06.412 [2024-06-10 11:52:35.299707] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.412 [2024-06-10 11:52:35.299761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.412 [2024-06-10 11:52:35.299772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.412 [2024-06-10 11:52:35.299777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.412 [2024-06-10 11:52:35.299781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.412 [2024-06-10 11:52:35.299791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 00:44:06.413 [2024-06-10 11:52:35.309729] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.309782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.309793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.309798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.309802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.309812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 
00:44:06.413 [2024-06-10 11:52:35.319769] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.319825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.319836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.319840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.319845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.319855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 00:44:06.413 [2024-06-10 11:52:35.329775] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.329872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.329882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.329887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.329891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.329901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 00:44:06.413 [2024-06-10 11:52:35.339694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.339747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.339758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.339763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.339767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.339777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 
00:44:06.413 [2024-06-10 11:52:35.349716] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.349774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.349785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.349790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.349794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.349804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 00:44:06.413 [2024-06-10 11:52:35.359881] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.359985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.359996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.360001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.360005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.360015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 00:44:06.413 [2024-06-10 11:52:35.369891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.369952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.369963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.369968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.369972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.369981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 
00:44:06.413 [2024-06-10 11:52:35.379909] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.413 [2024-06-10 11:52:35.379963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.413 [2024-06-10 11:52:35.379977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.413 [2024-06-10 11:52:35.379982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.413 [2024-06-10 11:52:35.379986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.413 [2024-06-10 11:52:35.379995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.413 qpair failed and we were unable to recover it. 00:44:06.674 [2024-06-10 11:52:35.389953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.674 [2024-06-10 11:52:35.390006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.674 [2024-06-10 11:52:35.390017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.674 [2024-06-10 11:52:35.390022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.674 [2024-06-10 11:52:35.390026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.674 [2024-06-10 11:52:35.390036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.399866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.399918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.399929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.399933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.399937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.399947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 
00:44:06.675 [2024-06-10 11:52:35.409995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.410096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.410107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.410111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.410115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.410125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.420018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.420085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.420095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.420100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.420104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.420117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.430057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.430105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.430116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.430121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.430125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.430135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 
00:44:06.675 [2024-06-10 11:52:35.439985] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.440043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.440054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.440059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.440063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.440073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.450128] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.450190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.450201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.450205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.450210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.450219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.460149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.460200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.460212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.460216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.460221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.460230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 
00:44:06.675 [2024-06-10 11:52:35.470195] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.470281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.470295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.470300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.470304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.470314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.480178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.480235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.480246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.480251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.480255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.480264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.490213] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.490278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.490289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.490294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.490298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.490308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 
00:44:06.675 [2024-06-10 11:52:35.500307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.500368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.500379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.500384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.500388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.500398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.510250] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.510303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.510314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.510319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.510326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.675 [2024-06-10 11:52:35.510336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.675 qpair failed and we were unable to recover it. 00:44:06.675 [2024-06-10 11:52:35.520322] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.675 [2024-06-10 11:52:35.520377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.675 [2024-06-10 11:52:35.520389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.675 [2024-06-10 11:52:35.520394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.675 [2024-06-10 11:52:35.520398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.520408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 
00:44:06.676 [2024-06-10 11:52:35.530347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.530413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.530432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.530438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.530443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.530456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.540373] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.540465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.540478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.540483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.540488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.540499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.550391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.550468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.550479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.550484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.550488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.550499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 
00:44:06.676 [2024-06-10 11:52:35.560447] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.560512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.560524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.560529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.560534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.560544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.570475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.570538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.570549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.570554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.570558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.570568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.580459] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.580534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.580546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.580551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.580555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.580566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 
00:44:06.676 [2024-06-10 11:52:35.590398] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.590447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.590458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.590463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.590467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.590477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.600548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.600604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.600615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.600620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.600628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.600638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.610540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.610596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.610607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.610612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.610617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.610626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 
00:44:06.676 [2024-06-10 11:52:35.620570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.620665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.620680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.620685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.620689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.620699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.630614] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.630667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.630683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.630688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.630692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.630702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 00:44:06.676 [2024-06-10 11:52:35.640644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.676 [2024-06-10 11:52:35.640703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.676 [2024-06-10 11:52:35.640715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.676 [2024-06-10 11:52:35.640720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.676 [2024-06-10 11:52:35.640724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.676 [2024-06-10 11:52:35.640734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.676 qpair failed and we were unable to recover it. 
00:44:06.938 [2024-06-10 11:52:35.650683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.938 [2024-06-10 11:52:35.650750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.938 [2024-06-10 11:52:35.650761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.938 [2024-06-10 11:52:35.650766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.938 [2024-06-10 11:52:35.650770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.938 [2024-06-10 11:52:35.650780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.938 qpair failed and we were unable to recover it. 00:44:06.938 [2024-06-10 11:52:35.660750] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.938 [2024-06-10 11:52:35.660832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.938 [2024-06-10 11:52:35.660843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.938 [2024-06-10 11:52:35.660848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.938 [2024-06-10 11:52:35.660852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.938 [2024-06-10 11:52:35.660862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.670738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.670793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.670804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.670809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.670813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.670822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 
00:44:06.939 [2024-06-10 11:52:35.680802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.680862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.680874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.680878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.680883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.680893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.690830] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.690925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.690936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.690946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.690950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.690961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.700786] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.700840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.700851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.700856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.700860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.700870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 
00:44:06.939 [2024-06-10 11:52:35.710837] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.710893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.710904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.710909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.710913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.710923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.720870] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.720926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.720937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.720942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.720946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.720956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.730869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.730940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.730951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.730956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.730960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.730969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 
00:44:06.939 [2024-06-10 11:52:35.740920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.740976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.740987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.740992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.740996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.741005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.750954] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.751040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.751051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.751056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.751060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.751070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.761015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.761079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.761090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.761095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.761099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.761109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 
00:44:06.939 [2024-06-10 11:52:35.771046] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.771105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.771116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.771121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.771125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.771135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.781037] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.781089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.781103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.781108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.781112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.939 [2024-06-10 11:52:35.781122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.939 qpair failed and we were unable to recover it. 00:44:06.939 [2024-06-10 11:52:35.791060] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.939 [2024-06-10 11:52:35.791114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.939 [2024-06-10 11:52:35.791125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.939 [2024-06-10 11:52:35.791130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.939 [2024-06-10 11:52:35.791134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.791144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 
00:44:06.940 [2024-06-10 11:52:35.801091] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.801154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.801167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.801172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.801178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.801191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:06.940 [2024-06-10 11:52:35.811162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.811229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.811240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.811245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.811249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.811259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:06.940 [2024-06-10 11:52:35.821150] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.821205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.821216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.821221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.821225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.821238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 
00:44:06.940 [2024-06-10 11:52:35.831190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.831295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.831307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.831311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.831315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.831325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:06.940 [2024-06-10 11:52:35.841201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.841256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.841266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.841271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.841275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.841285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:06.940 [2024-06-10 11:52:35.851203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.851272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.851283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.851288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.851292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.851302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 
00:44:06.940 [2024-06-10 11:52:35.861282] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.861339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.861357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.861363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.861367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.861380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:06.940 [2024-06-10 11:52:35.871280] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.871344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.871365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.871371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.871376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.871389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:06.940 [2024-06-10 11:52:35.881406] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.881466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.881483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.881489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.881493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.881507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 
00:44:06.940 [2024-06-10 11:52:35.891350] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.891412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.891424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.891429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.891433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.891445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:06.940 [2024-06-10 11:52:35.901371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:06.940 [2024-06-10 11:52:35.901423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:06.940 [2024-06-10 11:52:35.901435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:06.940 [2024-06-10 11:52:35.901440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:06.940 [2024-06-10 11:52:35.901444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:06.940 [2024-06-10 11:52:35.901454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:06.940 qpair failed and we were unable to recover it. 00:44:07.202 [2024-06-10 11:52:35.911393] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.202 [2024-06-10 11:52:35.911448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.202 [2024-06-10 11:52:35.911459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.202 [2024-06-10 11:52:35.911464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.202 [2024-06-10 11:52:35.911469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.202 [2024-06-10 11:52:35.911483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.202 qpair failed and we were unable to recover it. 
00:44:07.202 [2024-06-10 11:52:35.921432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.202 [2024-06-10 11:52:35.921489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.202 [2024-06-10 11:52:35.921500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.202 [2024-06-10 11:52:35.921505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.202 [2024-06-10 11:52:35.921509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.202 [2024-06-10 11:52:35.921519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.202 qpair failed and we were unable to recover it. 00:44:07.202 [2024-06-10 11:52:35.931468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.202 [2024-06-10 11:52:35.931528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.202 [2024-06-10 11:52:35.931540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.202 [2024-06-10 11:52:35.931544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.202 [2024-06-10 11:52:35.931548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:35.931558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:35.941535] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:35.941604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:35.941615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:35.941620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:35.941625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:35.941635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 
00:44:07.203 [2024-06-10 11:52:35.951518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:35.951570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:35.951581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:35.951586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:35.951590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:35.951600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:35.961546] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:35.961614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:35.961626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:35.961631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:35.961635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:35.961645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:35.971553] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:35.971621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:35.971632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:35.971637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:35.971641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:35.971651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 
00:44:07.203 [2024-06-10 11:52:35.981582] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:35.981636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:35.981647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:35.981652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:35.981656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:35.981666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:35.991602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:35.991658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:35.991672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:35.991677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:35.991681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:35.991691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:36.001641] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.001697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:36.001709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:36.001714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:36.001721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:36.001733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 
00:44:07.203 [2024-06-10 11:52:36.011682] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.011740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:36.011751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:36.011756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:36.011761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:36.011771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:36.021676] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.021735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:36.021746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:36.021751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:36.021755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:36.021765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:36.031732] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.031787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:36.031798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:36.031803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:36.031807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:36.031817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 
00:44:07.203 [2024-06-10 11:52:36.041731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.041790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:36.041801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:36.041806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:36.041810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:36.041820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:36.051783] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.051844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:36.051855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:36.051860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:36.051864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:36.051874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 00:44:07.203 [2024-06-10 11:52:36.061687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.061754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.203 [2024-06-10 11:52:36.061765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.203 [2024-06-10 11:52:36.061770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.203 [2024-06-10 11:52:36.061774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.203 [2024-06-10 11:52:36.061784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.203 qpair failed and we were unable to recover it. 
00:44:07.203 [2024-06-10 11:52:36.071875] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.203 [2024-06-10 11:52:36.071935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.071946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.071951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.071955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.071965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.204 [2024-06-10 11:52:36.081869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.081927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.081938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.081943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.081947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.081957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.204 [2024-06-10 11:52:36.091920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.092076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.092087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.092095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.092099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.092109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 
00:44:07.204 [2024-06-10 11:52:36.101912] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.101968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.101979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.101985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.101989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.102001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.204 [2024-06-10 11:52:36.111955] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.112009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.112020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.112025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.112029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.112038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.204 [2024-06-10 11:52:36.121990] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.122045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.122056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.122061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.122065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.122074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 
00:44:07.204 [2024-06-10 11:52:36.132015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.132073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.132084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.132089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.132093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.132103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.204 [2024-06-10 11:52:36.141965] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.142020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.142032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.142038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.142044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.142056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.204 [2024-06-10 11:52:36.152050] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.152118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.152129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.152134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.152138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.152148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 
00:44:07.204 [2024-06-10 11:52:36.162102] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.162197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.162208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.162213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.162217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.162227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.204 [2024-06-10 11:52:36.172125] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.204 [2024-06-10 11:52:36.172187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.204 [2024-06-10 11:52:36.172199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.204 [2024-06-10 11:52:36.172203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.204 [2024-06-10 11:52:36.172208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.204 [2024-06-10 11:52:36.172218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.204 qpair failed and we were unable to recover it. 00:44:07.467 [2024-06-10 11:52:36.182225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.182284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.182298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.182303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.182307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.182317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 
00:44:07.467 [2024-06-10 11:52:36.192188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.192244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.192255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.192260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.192264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.192274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 00:44:07.467 [2024-06-10 11:52:36.202101] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.202158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.202170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.202175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.202179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.202189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 00:44:07.467 [2024-06-10 11:52:36.212251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.212315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.212327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.212333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.212337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.212348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 
00:44:07.467 [2024-06-10 11:52:36.222304] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.222362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.222373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.222379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.222383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.222396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 00:44:07.467 [2024-06-10 11:52:36.232293] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.232349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.232360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.232365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.232369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.232379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 00:44:07.467 [2024-06-10 11:52:36.242317] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.242375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.242386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.242391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.242395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.242406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 
00:44:07.467 [2024-06-10 11:52:36.252351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.252409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.252420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.252425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.252430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.252440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 00:44:07.467 [2024-06-10 11:52:36.262442] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.262511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.262522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.262527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.262531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.262541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 00:44:07.467 [2024-06-10 11:52:36.272417] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.467 [2024-06-10 11:52:36.272473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.467 [2024-06-10 11:52:36.272487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.467 [2024-06-10 11:52:36.272492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.467 [2024-06-10 11:52:36.272496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.467 [2024-06-10 11:52:36.272507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.467 qpair failed and we were unable to recover it. 
00:44:07.468 [2024-06-10 11:52:36.282446] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.282518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.282529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.282534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.282538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.282548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.292518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.292582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.292593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.292598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.292602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.292612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.302500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.302554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.302565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.302570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.302574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.302584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 
00:44:07.468 [2024-06-10 11:52:36.312430] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.312482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.312493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.312498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.312502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.312515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.322472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.322542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.322553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.322558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.322562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.322573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.332580] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.332696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.332708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.332713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.332717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.332727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 
00:44:07.468 [2024-06-10 11:52:36.342573] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.342627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.342638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.342643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.342647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.342657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.352621] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.352684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.352696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.352700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.352704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.352714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.362699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.362764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.362778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.362782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.362787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.362797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 
00:44:07.468 [2024-06-10 11:52:36.372708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.372771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.372782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.372786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.372791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.372800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.382746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.382805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.382816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.382821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.382825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.382835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.392759] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.392812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.392823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.392828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.392832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.392842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 
00:44:07.468 [2024-06-10 11:52:36.402770] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.402828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.402839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.402844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.468 [2024-06-10 11:52:36.402854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.468 [2024-06-10 11:52:36.402864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.468 qpair failed and we were unable to recover it. 00:44:07.468 [2024-06-10 11:52:36.412879] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.468 [2024-06-10 11:52:36.412947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.468 [2024-06-10 11:52:36.412958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.468 [2024-06-10 11:52:36.412963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.469 [2024-06-10 11:52:36.412967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.469 [2024-06-10 11:52:36.412976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.469 qpair failed and we were unable to recover it. 00:44:07.469 [2024-06-10 11:52:36.422853] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.469 [2024-06-10 11:52:36.422908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.469 [2024-06-10 11:52:36.422919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.469 [2024-06-10 11:52:36.422924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.469 [2024-06-10 11:52:36.422928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.469 [2024-06-10 11:52:36.422938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.469 qpair failed and we were unable to recover it. 
00:44:07.469 [2024-06-10 11:52:36.432935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.469 [2024-06-10 11:52:36.432998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.469 [2024-06-10 11:52:36.433009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.469 [2024-06-10 11:52:36.433014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.469 [2024-06-10 11:52:36.433018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.469 [2024-06-10 11:52:36.433028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.469 qpair failed and we were unable to recover it. 00:44:07.731 [2024-06-10 11:52:36.442827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.731 [2024-06-10 11:52:36.442884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.731 [2024-06-10 11:52:36.442895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.731 [2024-06-10 11:52:36.442900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.731 [2024-06-10 11:52:36.442904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.731 [2024-06-10 11:52:36.442914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.731 qpair failed and we were unable to recover it. 00:44:07.731 [2024-06-10 11:52:36.452932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.731 [2024-06-10 11:52:36.452991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.731 [2024-06-10 11:52:36.453003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.731 [2024-06-10 11:52:36.453007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.731 [2024-06-10 11:52:36.453011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.731 [2024-06-10 11:52:36.453021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.731 qpair failed and we were unable to recover it. 
00:44:07.731 [2024-06-10 11:52:36.462989] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.731 [2024-06-10 11:52:36.463056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.731 [2024-06-10 11:52:36.463067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.731 [2024-06-10 11:52:36.463072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.731 [2024-06-10 11:52:36.463076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.731 [2024-06-10 11:52:36.463086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.731 qpair failed and we were unable to recover it. 00:44:07.731 [2024-06-10 11:52:36.472968] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.731 [2024-06-10 11:52:36.473027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.731 [2024-06-10 11:52:36.473039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.731 [2024-06-10 11:52:36.473043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.731 [2024-06-10 11:52:36.473048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.731 [2024-06-10 11:52:36.473057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.731 qpair failed and we were unable to recover it. 00:44:07.731 [2024-06-10 11:52:36.483009] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.731 [2024-06-10 11:52:36.483064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.731 [2024-06-10 11:52:36.483075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.731 [2024-06-10 11:52:36.483080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.731 [2024-06-10 11:52:36.483084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.731 [2024-06-10 11:52:36.483094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.731 qpair failed and we were unable to recover it. 
00:44:07.731 [2024-06-10 11:52:36.492981] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.731 [2024-06-10 11:52:36.493044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.731 [2024-06-10 11:52:36.493055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.731 [2024-06-10 11:52:36.493063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.731 [2024-06-10 11:52:36.493067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.731 [2024-06-10 11:52:36.493077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.731 qpair failed and we were unable to recover it. 00:44:07.731 [2024-06-10 11:52:36.503075] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.731 [2024-06-10 11:52:36.503135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.731 [2024-06-10 11:52:36.503146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.503151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.503155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.503165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.513093] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.513147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.513158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.513163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.513167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.513177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 
00:44:07.732 [2024-06-10 11:52:36.523112] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.523191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.523203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.523207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.523212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.523222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.533110] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.533170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.533181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.533186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.533190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.533200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.543245] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.543303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.543314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.543319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.543323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.543333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 
00:44:07.732 [2024-06-10 11:52:36.553191] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.553241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.553253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.553258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.553264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.553274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.563219] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.563273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.563284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.563289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.563293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.563303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.573247] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.573311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.573322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.573327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.573331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.573341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 
00:44:07.732 [2024-06-10 11:52:36.583275] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.583325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.583336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.583344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.583348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.583358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.593289] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.593345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.593357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.593362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.593366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.593376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.603344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.603423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.603441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.603447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.603452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.603465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 
00:44:07.732 [2024-06-10 11:52:36.613359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.613427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.613446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.613452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.613456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.613470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.623389] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.623450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.623468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.623474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.623479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.623492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 00:44:07.732 [2024-06-10 11:52:36.633397] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.732 [2024-06-10 11:52:36.633456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.732 [2024-06-10 11:52:36.633475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.732 [2024-06-10 11:52:36.633481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.732 [2024-06-10 11:52:36.633485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.732 [2024-06-10 11:52:36.633498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.732 qpair failed and we were unable to recover it. 
00:44:07.732 [2024-06-10 11:52:36.643470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.733 [2024-06-10 11:52:36.643524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.733 [2024-06-10 11:52:36.643537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.733 [2024-06-10 11:52:36.643542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.733 [2024-06-10 11:52:36.643547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.733 [2024-06-10 11:52:36.643557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.733 qpair failed and we were unable to recover it. 00:44:07.733 [2024-06-10 11:52:36.653458] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.733 [2024-06-10 11:52:36.653521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.733 [2024-06-10 11:52:36.653533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.733 [2024-06-10 11:52:36.653538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.733 [2024-06-10 11:52:36.653543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.733 [2024-06-10 11:52:36.653553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.733 qpair failed and we were unable to recover it. 00:44:07.733 [2024-06-10 11:52:36.663374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.733 [2024-06-10 11:52:36.663427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.733 [2024-06-10 11:52:36.663439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.733 [2024-06-10 11:52:36.663444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.733 [2024-06-10 11:52:36.663448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.733 [2024-06-10 11:52:36.663458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.733 qpair failed and we were unable to recover it. 
00:44:07.733 [2024-06-10 11:52:36.673548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.733 [2024-06-10 11:52:36.673599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.733 [2024-06-10 11:52:36.673614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.733 [2024-06-10 11:52:36.673619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.733 [2024-06-10 11:52:36.673623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.733 [2024-06-10 11:52:36.673633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.733 qpair failed and we were unable to recover it. 00:44:07.733 [2024-06-10 11:52:36.683556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.733 [2024-06-10 11:52:36.683612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.733 [2024-06-10 11:52:36.683623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.733 [2024-06-10 11:52:36.683628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.733 [2024-06-10 11:52:36.683632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.733 [2024-06-10 11:52:36.683642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.733 qpair failed and we were unable to recover it. 00:44:07.733 [2024-06-10 11:52:36.693582] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.733 [2024-06-10 11:52:36.693684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.733 [2024-06-10 11:52:36.693696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.733 [2024-06-10 11:52:36.693701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.733 [2024-06-10 11:52:36.693705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.733 [2024-06-10 11:52:36.693715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.733 qpair failed and we were unable to recover it. 
00:44:07.996 [2024-06-10 11:52:36.703613] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.703713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.703725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.703730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.703735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.703745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.713647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.713703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.713715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.713720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.713724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.713738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.723655] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.723716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.723727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.723731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.723736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.723746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 
00:44:07.996 [2024-06-10 11:52:36.733684] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.733742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.733753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.733758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.733762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.733772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.743597] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.743655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.743666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.743675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.743679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.743689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.753638] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.753697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.753709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.753713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.753717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.753727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 
00:44:07.996 [2024-06-10 11:52:36.763774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.763860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.763873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.763878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.763882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.763892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.773804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.773872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.773883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.773888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.773892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.773902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.783803] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.783863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.783874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.783879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.783883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.783893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 
00:44:07.996 [2024-06-10 11:52:36.793883] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.793939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.793950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.793955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.793959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.793969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.803827] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.803883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.803894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.803899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.996 [2024-06-10 11:52:36.803906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.996 [2024-06-10 11:52:36.803916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.996 qpair failed and we were unable to recover it. 00:44:07.996 [2024-06-10 11:52:36.813802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.996 [2024-06-10 11:52:36.813878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.996 [2024-06-10 11:52:36.813890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.996 [2024-06-10 11:52:36.813895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.813899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.813910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 
00:44:07.997 [2024-06-10 11:52:36.823963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.824017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.824029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.824034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.824038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.824048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 00:44:07.997 [2024-06-10 11:52:36.834003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.834077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.834089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.834094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.834098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.834107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 00:44:07.997 [2024-06-10 11:52:36.844002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.844057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.844069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.844073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.844077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.844088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 
00:44:07.997 [2024-06-10 11:52:36.854079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.854182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.854193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.854198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.854202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.854212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 00:44:07.997 [2024-06-10 11:52:36.864054] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.864109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.864120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.864125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.864129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.864139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 00:44:07.997 [2024-06-10 11:52:36.874086] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.874137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.874148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.874153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.874157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.874167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 
00:44:07.997 [2024-06-10 11:52:36.884140] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.884231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.884242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.884247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.884251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.884261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 00:44:07.997 [2024-06-10 11:52:36.894197] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.894305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.894317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.894325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.894329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.894339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 00:44:07.997 [2024-06-10 11:52:36.904076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.904176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.904188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.904195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.904201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6224000b90 00:44:07.997 [2024-06-10 11:52:36.904212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:44:07.997 qpair failed and we were unable to recover it. 
00:44:07.997 [2024-06-10 11:52:36.914104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.914175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.914199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.914208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.914215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a07270 00:44:07.997 [2024-06-10 11:52:36.914234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:44:07.997 qpair failed and we were unable to recover it. 00:44:07.997 [2024-06-10 11:52:36.924218] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.997 [2024-06-10 11:52:36.924285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.997 [2024-06-10 11:52:36.924301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.997 [2024-06-10 11:52:36.924309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.997 [2024-06-10 11:52:36.924316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1a07270 00:44:07.997 [2024-06-10 11:52:36.924330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:44:07.997 qpair failed and we were unable to recover it. 
00:44:07.997 [2024-06-10 11:52:36.924701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a14e30 is same with the state(5) to be set 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Write completed with error (sct=0, sc=8) 00:44:07.997 starting I/O failed 00:44:07.997 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 [2024-06-10 11:52:36.925264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:07.998 [2024-06-10 11:52:36.934266] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.998 [2024-06-10 11:52:36.934344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.998 [2024-06-10 11:52:36.934363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 130 00:44:07.998 [2024-06-10 11:52:36.934371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.998 [2024-06-10 11:52:36.934378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f622c000b90 00:44:07.998 [2024-06-10 11:52:36.934394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:07.998 qpair failed and we were unable to recover it. 00:44:07.998 [2024-06-10 11:52:36.944262] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.998 [2024-06-10 11:52:36.944336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.998 [2024-06-10 11:52:36.944360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.998 [2024-06-10 11:52:36.944369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.998 [2024-06-10 11:52:36.944375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f622c000b90 00:44:07.998 [2024-06-10 11:52:36.944394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:44:07.998 qpair failed and we were unable to recover it. 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O 
failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Read completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 Write completed with error (sct=0, sc=8) 00:44:07.998 starting I/O failed 00:44:07.998 [2024-06-10 11:52:36.945373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:44:07.998 [2024-06-10 11:52:36.954295] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.998 [2024-06-10 11:52:36.954447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.998 [2024-06-10 11:52:36.954496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.998 [2024-06-10 11:52:36.954519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.998 [2024-06-10 11:52:36.954538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f621c000b90 00:44:07.998 [2024-06-10 11:52:36.954583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:44:07.998 qpair failed and we were unable to recover it. 00:44:07.998 [2024-06-10 11:52:36.964429] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:44:07.998 [2024-06-10 11:52:36.964565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:44:07.998 [2024-06-10 11:52:36.964595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:44:07.998 [2024-06-10 11:52:36.964610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:44:07.998 [2024-06-10 11:52:36.964624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f621c000b90 00:44:07.998 [2024-06-10 11:52:36.964653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:44:07.998 qpair failed and we were unable to recover it. 
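For context, the subsystem the host keeps re-connecting to is an ordinary SPDK NVMe-oF/TCP target listening at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1. A minimal sketch of such a target configuration through scripts/rpc.py follows; the bdev name and serial number are placeholders, not the values target_disconnect.sh actually uses:

  # minimal nvmf/TCP target matching the address and NQN seen in the log (sketch only)
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # placeholder backing bdev
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKTESTSERIAL
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420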
00:44:07.998 [2024-06-10 11:52:36.965190] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a14e30 (9): Bad file descriptor 00:44:08.259 Initializing NVMe Controllers 00:44:08.259 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:08.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:08.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:44:08.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:44:08.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:44:08.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:44:08.259 Initialization complete. Launching workers. 00:44:08.259 Starting thread on core 1 00:44:08.259 Starting thread on core 2 00:44:08.259 Starting thread on core 3 00:44:08.259 Starting thread on core 0 00:44:08.259 11:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:44:08.259 00:44:08.259 real 0m11.509s 00:44:08.259 user 0m21.614s 00:44:08.259 sys 0m3.676s 00:44:08.259 11:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:08.259 11:52:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:44:08.259 ************************************ 00:44:08.259 END TEST nvmf_target_disconnect_tc2 00:44:08.259 ************************************ 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:08.259 rmmod nvme_tcp 00:44:08.259 rmmod nvme_fabrics 00:44:08.259 rmmod nvme_keyring 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2520874 ']' 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2520874 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2520874 ']' 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 2520874 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:08.259 11:52:37 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2520874 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2520874' 00:44:08.259 killing process with pid 2520874 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 2520874 00:44:08.259 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 2520874 00:44:08.520 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:08.521 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:08.521 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:08.521 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:08.521 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:08.521 11:52:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:08.521 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:08.521 11:52:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:10.431 11:52:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:10.431 00:44:10.431 real 0m21.413s 00:44:10.431 user 0m49.397s 00:44:10.431 sys 0m9.481s 00:44:10.431 11:52:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:10.431 11:52:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:44:10.431 ************************************ 00:44:10.431 END TEST nvmf_target_disconnect 00:44:10.431 ************************************ 00:44:10.431 11:52:39 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:44:10.431 11:52:39 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:10.431 11:52:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.693 11:52:39 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:44:10.693 00:44:10.693 real 22m33.144s 00:44:10.693 user 48m13.966s 00:44:10.693 sys 7m2.786s 00:44:10.693 11:52:39 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:10.693 11:52:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.693 ************************************ 00:44:10.693 END TEST nvmf_tcp 00:44:10.693 ************************************ 00:44:10.693 11:52:39 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:44:10.693 11:52:39 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:10.693 11:52:39 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:44:10.693 11:52:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:10.693 11:52:39 -- common/autotest_common.sh@10 -- # set +x 00:44:10.693 ************************************ 00:44:10.693 START TEST spdkcli_nvmf_tcp 00:44:10.693 ************************************ 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:10.693 * Looking for test storage... 00:44:10.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:10.693 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2522736 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2522736 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 2522736 ']' 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:10.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:44:10.694 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.955 [2024-06-10 11:52:39.703553] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:44:10.955 [2024-06-10 11:52:39.703622] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522736 ] 00:44:10.955 EAL: No free 2048 kB hugepages reported on node 1 00:44:10.955 [2024-06-10 11:52:39.769500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:10.955 [2024-06-10 11:52:39.844648] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:44:10.955 [2024-06-10 11:52:39.844653] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:10.955 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:44:10.955 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:44:10.955 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:10.955 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:10.955 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.216 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:11.216 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:11.216 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:11.216 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:11.216 11:52:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.216 11:52:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:11.216 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:11.216 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:11.216 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:11.216 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:11.216 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:11.216 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:11.216 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:11.216 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:11.216 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:11.216 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:11.216 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:11.216 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:11.216 ' 00:44:13.760 [2024-06-10 11:52:42.388020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:15.143 [2024-06-10 11:52:43.696161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:17.686 [2024-06-10 11:52:46.127387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:19.600 [2024-06-10 11:52:48.229705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:20.992 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:20.992 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:20.992 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:20.992 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:20.992 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:20.992 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:20.992 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:20.992 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:20.992 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:20.992 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:20.992 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:20.992 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:20.992 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:20.992 11:52:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:20.992 11:52:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:20.992 11:52:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.253 11:52:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:21.253 11:52:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:21.253 11:52:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.253 11:52:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:21.253 11:52:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:21.514 11:52:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:21.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:21.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:21.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:21.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:21.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:21.514 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:21.514 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:21.514 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:21.514 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:21.514 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:21.514 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:21.514 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:21.514 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:21.514 ' 00:44:28.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:28.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:28.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:28.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:28.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:28.097 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:28.097 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:28.097 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:28.097 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:28.097 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:28.097 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:44:28.097 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:28.097 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:28.097 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:28.097 11:52:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:28.097 11:52:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:28.097 11:52:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:28.097 11:52:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2522736 00:44:28.097 11:52:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2522736 ']' 00:44:28.097 11:52:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2522736 00:44:28.097 11:52:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2522736 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2522736' 00:44:28.097 killing process with pid 2522736 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 2522736 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 2522736 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2522736 ']' 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2522736 00:44:28.097 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2522736 ']' 00:44:28.098 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2522736 00:44:28.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2522736) - No such process 00:44:28.098 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 2522736 is not found' 00:44:28.098 Process with pid 2522736 is not found 00:44:28.098 11:52:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:28.098 11:52:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:28.098 11:52:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:28.098 00:44:28.098 real 0m16.675s 00:44:28.098 user 0m36.336s 00:44:28.098 sys 0m0.925s 00:44:28.098 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:28.098 11:52:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:28.098 ************************************ 00:44:28.098 END TEST spdkcli_nvmf_tcp 00:44:28.098 ************************************ 00:44:28.098 11:52:56 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:28.098 11:52:56 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:44:28.098 11:52:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:28.098 11:52:56 -- common/autotest_common.sh@10 -- # set +x 00:44:28.098 ************************************ 00:44:28.098 START TEST nvmf_identify_passthru 00:44:28.098 ************************************ 00:44:28.098 11:52:56 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:28.098 * Looking for test storage... 00:44:28.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:28.098 11:52:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:28.098 11:52:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:28.098 11:52:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:28.098 11:52:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:28.098 11:52:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:28.098 11:52:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:28.098 11:52:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:28.098 11:52:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:28.098 11:52:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:28.098 11:52:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:28.098 11:52:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:28.098 11:52:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:28.098 11:52:56 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:44:28.098 11:52:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:34.686 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:34.686 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:34.686 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:34.687 11:53:03 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:34.687 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:34.687 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:34.687 11:53:03 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:34.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:34.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:44:34.687 00:44:34.687 --- 10.0.0.2 ping statistics --- 00:44:34.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:34.687 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:34.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:34.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:44:34.687 00:44:34.687 --- 10.0.0.1 ping statistics --- 00:44:34.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:34.687 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:34.687 11:53:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:34.687 11:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:34.687 11:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:44:34.687 11:53:03 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:44:34.687 11:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:44:34.687 11:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:44:34.687 11:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:34.687 11:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:34.687 11:53:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:34.947 EAL: No free 2048 kB hugepages reported on node 1 00:44:35.207 
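
The nvmf_tcp_init sequence traced above turns the two E810 ports into a self-contained loopback topology: cvl_0_0 is moved into a private network namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1; an iptables rule opens TCP port 4420 and one ping in each direction verifies connectivity. A condensed standalone sketch of the same steps (interface names and the 10.0.0.0/24 addresses are simply what this host uses, not anything fixed by SPDK):

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                           # target port now lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target side
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                      # target namespace -> root namespace
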
11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605480 00:44:35.207 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:35.207 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:35.207 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:35.207 EAL: No free 2048 kB hugepages reported on node 1 00:44:35.862 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:44:35.862 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.862 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.862 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2529797 00:44:35.862 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:35.862 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:35.862 11:53:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2529797 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 2529797 ']' 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:35.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:44:35.862 11:53:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.862 [2024-06-10 11:53:04.665066] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:44:35.862 [2024-06-10 11:53:04.665123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:35.862 EAL: No free 2048 kB hugepages reported on node 1 00:44:35.862 [2024-06-10 11:53:04.731758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:35.862 [2024-06-10 11:53:04.801589] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:35.862 [2024-06-10 11:53:04.801628] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
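
Just above, identify_passthru.sh records the serial and model number of the local controller directly over PCIe; these become the reference values that the NVMe/TCP side must reproduce later in the test. A condensed sketch of that capture from the repository root (head -n1 stands in for the get_first_nvme_bdf helper and is an assumption; the BDF, grep patterns and awk fields are exactly the ones used above):

  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "$bdf $serial $model"   # here: 0000:65:00.0 S64GNE0R605480 SAMSUNG (awk keeps only the first word of the model string)
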
00:44:35.862 [2024-06-10 11:53:04.801635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:35.862 [2024-06-10 11:53:04.801642] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:35.862 [2024-06-10 11:53:04.801647] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:35.862 [2024-06-10 11:53:04.801701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:44:35.862 [2024-06-10 11:53:04.801790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:44:35.862 [2024-06-10 11:53:04.802277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:44:35.862 [2024-06-10 11:53:04.802277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:36.828 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:44:36.828 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:44:36.828 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:36.828 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:36.828 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.828 INFO: Log level set to 20 00:44:36.828 INFO: Requests: 00:44:36.828 { 00:44:36.828 "jsonrpc": "2.0", 00:44:36.828 "method": "nvmf_set_config", 00:44:36.828 "id": 1, 00:44:36.828 "params": { 00:44:36.828 "admin_cmd_passthru": { 00:44:36.828 "identify_ctrlr": true 00:44:36.828 } 00:44:36.828 } 00:44:36.828 } 00:44:36.828 00:44:36.828 INFO: response: 00:44:36.828 { 00:44:36.828 "jsonrpc": "2.0", 00:44:36.828 "id": 1, 00:44:36.828 "result": true 00:44:36.828 } 00:44:36.828 00:44:36.828 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:36.828 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:36.828 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:36.828 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.828 INFO: Setting log level to 20 00:44:36.828 INFO: Setting log level to 20 00:44:36.828 INFO: Log level set to 20 00:44:36.828 INFO: Log level set to 20 00:44:36.828 INFO: Requests: 00:44:36.828 { 00:44:36.828 "jsonrpc": "2.0", 00:44:36.828 "method": "framework_start_init", 00:44:36.828 "id": 1 00:44:36.828 } 00:44:36.828 00:44:36.829 INFO: Requests: 00:44:36.829 { 00:44:36.829 "jsonrpc": "2.0", 00:44:36.829 "method": "framework_start_init", 00:44:36.829 "id": 1 00:44:36.829 } 00:44:36.829 00:44:36.829 [2024-06-10 11:53:05.605096] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:36.829 INFO: response: 00:44:36.829 { 00:44:36.829 "jsonrpc": "2.0", 00:44:36.829 "id": 1, 00:44:36.829 "result": true 00:44:36.829 } 00:44:36.829 00:44:36.829 INFO: response: 00:44:36.829 { 00:44:36.829 "jsonrpc": "2.0", 00:44:36.829 "id": 1, 00:44:36.829 "result": true 00:44:36.829 } 00:44:36.829 00:44:36.829 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:36.829 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:36.829 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:36.829 11:53:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:44:36.829 INFO: Setting log level to 40 00:44:36.829 INFO: Setting log level to 40 00:44:36.829 INFO: Setting log level to 40 00:44:36.829 [2024-06-10 11:53:05.618328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:36.829 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:36.829 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:36.829 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:36.829 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.829 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:44:36.829 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:36.829 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:37.090 Nvme0n1 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:37.090 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:37.090 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:37.090 11:53:05 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:37.090 11:53:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:37.090 [2024-06-10 11:53:05.999931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:37.090 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:37.090 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:37.090 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:37.090 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:37.090 [ 00:44:37.090 { 00:44:37.090 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:37.090 "subtype": "Discovery", 00:44:37.090 "listen_addresses": [], 00:44:37.090 "allow_any_host": true, 00:44:37.090 "hosts": [] 00:44:37.090 }, 00:44:37.090 { 00:44:37.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:37.090 "subtype": "NVMe", 00:44:37.090 "listen_addresses": [ 00:44:37.090 { 00:44:37.090 "trtype": "TCP", 00:44:37.090 "adrfam": "IPv4", 00:44:37.090 "traddr": "10.0.0.2", 00:44:37.090 "trsvcid": "4420" 00:44:37.090 } 00:44:37.090 ], 00:44:37.090 "allow_any_host": true, 00:44:37.090 "hosts": [], 00:44:37.090 "serial_number": 
"SPDK00000000000001", 00:44:37.090 "model_number": "SPDK bdev Controller", 00:44:37.090 "max_namespaces": 1, 00:44:37.090 "min_cntlid": 1, 00:44:37.090 "max_cntlid": 65519, 00:44:37.090 "namespaces": [ 00:44:37.090 { 00:44:37.090 "nsid": 1, 00:44:37.090 "bdev_name": "Nvme0n1", 00:44:37.090 "name": "Nvme0n1", 00:44:37.090 "nguid": "36344730526054800025384500000047", 00:44:37.090 "uuid": "36344730-5260-5480-0025-384500000047" 00:44:37.090 } 00:44:37.090 ] 00:44:37.090 } 00:44:37.090 ] 00:44:37.090 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:37.090 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:37.090 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:37.090 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:37.090 EAL: No free 2048 kB hugepages reported on node 1 00:44:37.351 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605480 00:44:37.351 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:37.351 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:37.351 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:37.612 EAL: No free 2048 kB hugepages reported on node 1 00:44:37.612 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:44:37.612 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605480 '!=' S64GNE0R605480 ']' 00:44:37.612 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:44:37.612 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:37.612 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:37.612 11:53:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:37.612 rmmod nvme_tcp 00:44:37.612 rmmod nvme_fabrics 00:44:37.612 rmmod nvme_keyring 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:44:37.612 11:53:06 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2529797 ']' 00:44:37.612 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2529797 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 2529797 ']' 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 2529797 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:37.612 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2529797 00:44:37.873 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:44:37.873 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:44:37.873 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2529797' 00:44:37.873 killing process with pid 2529797 00:44:37.873 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 2529797 00:44:37.873 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 2529797 00:44:38.134 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:38.134 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:38.134 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:38.134 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:38.134 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:38.134 11:53:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:38.134 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:38.134 11:53:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:40.045 11:53:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:40.045 00:44:40.045 real 0m12.679s 00:44:40.045 user 0m10.630s 00:44:40.045 sys 0m6.002s 00:44:40.045 11:53:08 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:40.045 11:53:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:40.045 ************************************ 00:44:40.045 END TEST nvmf_identify_passthru 00:44:40.045 ************************************ 00:44:40.045 11:53:08 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:40.045 11:53:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:40.045 11:53:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:40.045 11:53:08 -- common/autotest_common.sh@10 -- # set +x 00:44:40.307 ************************************ 00:44:40.307 START TEST nvmf_dif 00:44:40.307 ************************************ 00:44:40.307 11:53:09 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:40.307 * Looking for test storage... 
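
The nvmf_identify_passthru case that ends here reduces to a short RPC sequence plus one comparison: attach the local drive as bdev Nvme0, export it through subsystem cnode1 on the namespaced TCP listener, then run Identify over the fabric and check that serial and model match the PCIe-side values captured earlier; the match only works because nvmf_set_config --passthru-identify-ctrlr forwards Identify admin commands to the backing controller. A sketch of the same flow driven with scripts/rpc.py instead of the autotest rpc_cmd wrapper (assumes an nvmf_tgt with the TCP transport already created and listening on the default /var/tmp/spdk.sock; in the run above that target has since been shut down, and $serial is the value read over PCIe earlier):

  RPC=scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  tr='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  fabric_serial=$(build/bin/spdk_nvme_identify -r "$tr" | grep 'Serial Number:' | awk '{print $3}')
  [ "$fabric_serial" = "$serial" ]   # passthru works if the fabric-side Identify reports the physical drive's serial
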
00:44:40.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:40.307 11:53:09 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:40.307 11:53:09 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:40.307 11:53:09 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:40.307 11:53:09 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:40.307 11:53:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.307 11:53:09 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.307 11:53:09 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.307 11:53:09 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:44:40.307 11:53:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:40.307 11:53:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:40.307 11:53:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:40.307 11:53:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:40.307 11:53:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:40.307 11:53:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:40.307 11:53:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:40.307 11:53:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:40.307 11:53:09 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:44:40.307 11:53:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:46.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:46.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:46.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:46.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:46.894 11:53:15 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:46.894 11:53:15 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:46.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:46.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:44:46.894 00:44:46.894 --- 10.0.0.2 ping statistics --- 00:44:46.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:46.894 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:44:46.895 11:53:15 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:46.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:46.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:44:46.895 00:44:46.895 --- 10.0.0.1 ping statistics --- 00:44:46.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:46.895 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:44:46.895 11:53:15 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:46.895 11:53:15 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:44:46.895 11:53:15 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:44:46.895 11:53:15 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:50.192 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:44:50.192 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:50.192 11:53:18 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:50.192 11:53:18 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:50.192 11:53:18 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:50.192 11:53:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2535753 00:44:50.192 11:53:18 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2535753 00:44:50.192 11:53:18 nvmf_dif -- 
common/autotest_common.sh@830 -- # '[' -z 2535753 ']' 00:44:50.192 11:53:18 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:50.192 11:53:18 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:44:50.192 11:53:18 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:50.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:50.193 11:53:18 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:44:50.193 11:53:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.193 11:53:18 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:50.193 [2024-06-10 11:53:19.046796] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:44:50.193 [2024-06-10 11:53:19.046843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:50.193 EAL: No free 2048 kB hugepages reported on node 1 00:44:50.193 [2024-06-10 11:53:19.110700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:50.454 [2024-06-10 11:53:19.175255] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:50.454 [2024-06-10 11:53:19.175289] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:50.454 [2024-06-10 11:53:19.175296] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:50.454 [2024-06-10 11:53:19.175303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:50.454 [2024-06-10 11:53:19.175309] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:50.454 [2024-06-10 11:53:19.175331] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:44:50.454 11:53:19 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.454 11:53:19 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:50.454 11:53:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:50.454 11:53:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.454 [2024-06-10 11:53:19.304766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:50.454 11:53:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:50.454 11:53:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.454 ************************************ 00:44:50.454 START TEST fio_dif_1_default 00:44:50.454 ************************************ 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.454 bdev_null0 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.454 [2024-06-10 11:53:19.373046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:44:50.454 { 00:44:50.454 "params": { 00:44:50.454 "name": "Nvme$subsystem", 00:44:50.454 "trtype": "$TEST_TRANSPORT", 00:44:50.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:50.454 "adrfam": "ipv4", 00:44:50.454 "trsvcid": "$NVMF_PORT", 00:44:50.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:50.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:50.454 "hdgst": ${hdgst:-false}, 00:44:50.454 "ddgst": ${ddgst:-false} 00:44:50.454 }, 00:44:50.454 "method": "bdev_nvme_attach_controller" 00:44:50.454 } 00:44:50.454 EOF 00:44:50.454 )") 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # grep libasan 00:44:50.454 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:44:50.455 "params": { 00:44:50.455 "name": "Nvme0", 00:44:50.455 "trtype": "tcp", 00:44:50.455 "traddr": "10.0.0.2", 00:44:50.455 "adrfam": "ipv4", 00:44:50.455 "trsvcid": "4420", 00:44:50.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:50.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:50.455 "hdgst": false, 00:44:50.455 "ddgst": false 00:44:50.455 }, 00:44:50.455 "method": "bdev_nvme_attach_controller" 00:44:50.455 }' 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:44:50.455 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:44:50.738 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:44:50.738 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:44:50.738 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:50.738 11:53:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:51.002 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:51.002 fio-3.35 00:44:51.002 Starting 1 thread 00:44:51.002 EAL: No free 2048 kB hugepages reported on node 1 00:45:03.235 00:45:03.235 filename0: (groupid=0, jobs=1): err= 0: pid=2536153: Mon Jun 10 11:53:30 2024 00:45:03.235 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:45:03.235 slat (nsec): min=7837, max=31883, avg=8058.58, stdev=1081.34 00:45:03.235 clat (usec): min=40967, max=44875, avg=41977.76, stdev=216.67 00:45:03.235 lat (usec): min=40975, max=44907, avg=41985.82, stdev=217.17 00:45:03.235 clat percentiles (usec): 00:45:03.235 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:45:03.235 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:45:03.235 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:45:03.235 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:45:03.235 | 99.99th=[44827] 00:45:03.235 bw ( KiB/s): min= 352, max= 384, per=99.75%, avg=380.80, stdev= 9.85, samples=20 00:45:03.235 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 
00:45:03.235 lat (msec) : 50=100.00% 00:45:03.235 cpu : usr=95.85%, sys=3.95%, ctx=9, majf=0, minf=211 00:45:03.235 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:03.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.235 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:03.235 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:03.235 00:45:03.235 Run status group 0 (all jobs): 00:45:03.235 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10038-10038msec 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 00:45:03.235 real 0m11.095s 00:45:03.235 user 0m21.898s 00:45:03.235 sys 0m0.688s 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 ************************************ 00:45:03.235 END TEST fio_dif_1_default 00:45:03.235 ************************************ 00:45:03.235 11:53:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:03.235 11:53:30 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:03.235 11:53:30 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 ************************************ 00:45:03.235 START TEST fio_dif_1_multi_subsystems 00:45:03.235 ************************************ 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:03.235 
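
The fio_dif_1_default case that finished just above wires two pieces together: on the target side a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 behind an NVMe/TCP subsystem on a transport created with --dif-insert-or-strip, and on the initiator side fio loaded with SPDK's spdk_bdev ioengine plugin and a generated JSON config that attaches that subsystem. A trimmed sketch of both halves; the JSON wrapper around the bdev_nvme_attach_controller entry and the job options other than rw, bs and iodepth are assumptions here, only the fragment and values printed in the trace are taken from this run:

  # target side (RPCs against the namespaced nvmf_tgt from the trace above)
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: fio with the spdk_bdev plugin; /tmp/bdev.json (wrapper assumed, fragment as traced) contains:
  #   {"subsystems":[{"subsystem":"bdev","config":[
  #     {"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2",
  #      "adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
  #      "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}
  LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json \
      --thread=1 --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4096 --iodepth=4 --runtime=10
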
11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 bdev_null0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 [2024-06-10 11:53:30.546998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 bdev_null1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:03.235 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:03.236 { 00:45:03.236 "params": { 00:45:03.236 "name": "Nvme$subsystem", 00:45:03.236 "trtype": "$TEST_TRANSPORT", 00:45:03.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:03.236 "adrfam": "ipv4", 00:45:03.236 "trsvcid": "$NVMF_PORT", 00:45:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:03.236 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:45:03.236 "hdgst": ${hdgst:-false}, 00:45:03.236 "ddgst": ${ddgst:-false} 00:45:03.236 }, 00:45:03.236 "method": "bdev_nvme_attach_controller" 00:45:03.236 } 00:45:03.236 EOF 00:45:03.236 )") 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:03.236 { 00:45:03.236 "params": { 00:45:03.236 "name": "Nvme$subsystem", 00:45:03.236 "trtype": "$TEST_TRANSPORT", 00:45:03.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:03.236 "adrfam": "ipv4", 00:45:03.236 "trsvcid": "$NVMF_PORT", 00:45:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:03.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:03.236 "hdgst": ${hdgst:-false}, 00:45:03.236 "ddgst": ${ddgst:-false} 00:45:03.236 }, 00:45:03.236 "method": "bdev_nvme_attach_controller" 00:45:03.236 } 00:45:03.236 EOF 00:45:03.236 )") 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:03.236 "params": { 00:45:03.236 "name": "Nvme0", 00:45:03.236 "trtype": "tcp", 00:45:03.236 "traddr": "10.0.0.2", 00:45:03.236 "adrfam": "ipv4", 00:45:03.236 "trsvcid": "4420", 00:45:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:03.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:03.236 "hdgst": false, 00:45:03.236 "ddgst": false 00:45:03.236 }, 00:45:03.236 "method": "bdev_nvme_attach_controller" 00:45:03.236 },{ 00:45:03.236 "params": { 00:45:03.236 "name": "Nvme1", 00:45:03.236 "trtype": "tcp", 00:45:03.236 "traddr": "10.0.0.2", 00:45:03.236 "adrfam": "ipv4", 00:45:03.236 "trsvcid": "4420", 00:45:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:03.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:03.236 "hdgst": false, 00:45:03.236 "ddgst": false 00:45:03.236 }, 00:45:03.236 "method": "bdev_nvme_attach_controller" 00:45:03.236 }' 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:03.236 11:53:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.236 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:03.236 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:03.236 fio-3.35 00:45:03.236 Starting 2 threads 00:45:03.236 EAL: No free 2048 kB hugepages reported on node 1 00:45:13.237 00:45:13.237 filename0: (groupid=0, jobs=1): err= 0: pid=2538603: Mon Jun 10 11:53:41 2024 00:45:13.237 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10031msec) 00:45:13.237 slat (nsec): min=7839, max=39852, avg=8182.58, stdev=1319.61 00:45:13.237 clat (usec): min=40914, max=42378, avg=41945.76, stdev=186.95 00:45:13.237 lat (usec): min=40922, max=42418, avg=41953.94, stdev=186.97 00:45:13.237 clat percentiles (usec): 00:45:13.237 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:45:13.237 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:45:13.237 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:45:13.237 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:45:13.237 | 99.99th=[42206] 
00:45:13.237 bw ( KiB/s): min= 352, max= 384, per=49.64%, avg=380.80, stdev= 9.85, samples=20 00:45:13.237 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:45:13.237 lat (msec) : 50=100.00% 00:45:13.237 cpu : usr=96.97%, sys=2.81%, ctx=14, majf=0, minf=178 00:45:13.237 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:13.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:13.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:13.237 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:13.237 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:13.237 filename1: (groupid=0, jobs=1): err= 0: pid=2538604: Mon Jun 10 11:53:41 2024 00:45:13.237 read: IOPS=96, BW=384KiB/s (394kB/s)(3856KiB/10033msec) 00:45:13.237 slat (nsec): min=7850, max=33189, avg=8322.09, stdev=1716.09 00:45:13.237 clat (usec): min=1023, max=42117, avg=41603.79, stdev=3680.32 00:45:13.237 lat (usec): min=1047, max=42126, avg=41612.12, stdev=3679.17 00:45:13.237 clat percentiles (usec): 00:45:13.237 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:45:13.237 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:45:13.238 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:45:13.238 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:45:13.238 | 99.99th=[42206] 00:45:13.238 bw ( KiB/s): min= 352, max= 416, per=50.17%, avg=384.00, stdev=10.38, samples=20 00:45:13.238 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:45:13.238 lat (msec) : 2=0.83%, 50=99.17% 00:45:13.238 cpu : usr=97.01%, sys=2.53%, ctx=29, majf=0, minf=57 00:45:13.238 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:13.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:13.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:13.238 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:13.238 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:13.238 00:45:13.238 Run status group 0 (all jobs): 00:45:13.238 READ: bw=765KiB/s (784kB/s), 381KiB/s-384KiB/s (390kB/s-394kB/s), io=7680KiB (7864kB), run=10031-10033msec 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 00:45:13.238 real 0m11.307s 00:45:13.238 user 0m34.502s 00:45:13.238 sys 0m0.816s 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 ************************************ 00:45:13.238 END TEST fio_dif_1_multi_subsystems 00:45:13.238 ************************************ 00:45:13.238 11:53:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:13.238 11:53:41 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:13.238 11:53:41 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 ************************************ 00:45:13.238 START TEST fio_dif_rand_params 00:45:13.238 ************************************ 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 bdev_null0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:13.238 [2024-06-10 11:53:41.950831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:13.238 { 00:45:13.238 "params": { 00:45:13.238 "name": "Nvme$subsystem", 00:45:13.238 "trtype": "$TEST_TRANSPORT", 00:45:13.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:13.238 "adrfam": "ipv4", 00:45:13.238 "trsvcid": "$NVMF_PORT", 00:45:13.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:13.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:13.238 "hdgst": ${hdgst:-false}, 00:45:13.238 "ddgst": ${ddgst:-false} 00:45:13.238 }, 00:45:13.238 "method": 
"bdev_nvme_attach_controller" 00:45:13.238 } 00:45:13.238 EOF 00:45:13.238 )") 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:13.238 11:53:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:13.238 "params": { 00:45:13.238 "name": "Nvme0", 00:45:13.238 "trtype": "tcp", 00:45:13.238 "traddr": "10.0.0.2", 00:45:13.238 "adrfam": "ipv4", 00:45:13.238 "trsvcid": "4420", 00:45:13.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:13.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:13.238 "hdgst": false, 00:45:13.238 "ddgst": false 00:45:13.238 }, 00:45:13.238 "method": "bdev_nvme_attach_controller" 00:45:13.238 }' 00:45:13.239 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:13.239 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:13.239 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:13.239 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:13.239 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:45:13.239 11:53:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:13.239 11:53:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:13.239 11:53:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:13.239 11:53:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:13.239 11:53:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:13.499 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:13.499 ... 
00:45:13.499 fio-3.35 00:45:13.499 Starting 3 threads 00:45:13.499 EAL: No free 2048 kB hugepages reported on node 1 00:45:20.088 00:45:20.088 filename0: (groupid=0, jobs=1): err= 0: pid=2540864: Mon Jun 10 11:53:47 2024 00:45:20.088 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(123MiB/5010msec) 00:45:20.088 slat (nsec): min=7890, max=31555, avg=9033.26, stdev=1433.75 00:45:20.088 clat (usec): min=5960, max=56651, avg=15306.41, stdev=12152.58 00:45:20.088 lat (usec): min=5969, max=56660, avg=15315.44, stdev=12152.69 00:45:20.088 clat percentiles (usec): 00:45:20.088 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 8848], 00:45:20.088 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[11863], 60.00th=[12780], 00:45:20.088 | 70.00th=[13829], 80.00th=[15401], 90.00th=[18482], 95.00th=[51119], 00:45:20.088 | 99.00th=[53740], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:45:20.088 | 99.99th=[56886] 00:45:20.088 bw ( KiB/s): min=15360, max=31232, per=30.58%, avg=25036.80, stdev=5198.75, samples=10 00:45:20.088 iops : min= 120, max= 244, avg=195.60, stdev=40.62, samples=10 00:45:20.088 lat (msec) : 10=32.52%, 20=57.70%, 50=3.26%, 100=6.52% 00:45:20.088 cpu : usr=95.75%, sys=3.97%, ctx=11, majf=0, minf=91 00:45:20.089 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.089 issued rwts: total=981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:20.089 filename0: (groupid=0, jobs=1): err= 0: pid=2540865: Mon Jun 10 11:53:47 2024 00:45:20.089 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5045msec) 00:45:20.089 slat (nsec): min=7855, max=34211, avg=8518.97, stdev=1140.85 00:45:20.089 clat (usec): min=5057, max=90531, avg=14105.53, stdev=12232.61 00:45:20.089 lat (usec): min=5065, max=90539, avg=14114.05, stdev=12232.61 00:45:20.089 clat percentiles (usec): 00:45:20.089 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 8291], 00:45:20.089 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11076], 00:45:20.089 | 70.00th=[11994], 80.00th=[13435], 90.00th=[15664], 95.00th=[49021], 00:45:20.089 | 99.00th=[52167], 99.50th=[54789], 99.90th=[90702], 99.95th=[90702], 00:45:20.089 | 99.99th=[90702] 00:45:20.089 bw ( KiB/s): min=17664, max=38144, per=33.30%, avg=27259.90, stdev=6569.74, samples=10 00:45:20.089 iops : min= 138, max= 298, avg=212.90, stdev=51.41, samples=10 00:45:20.089 lat (msec) : 10=41.28%, 20=49.34%, 50=5.07%, 100=4.32% 00:45:20.089 cpu : usr=96.93%, sys=2.82%, ctx=6, majf=0, minf=120 00:45:20.089 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.089 issued rwts: total=1066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:20.089 filename0: (groupid=0, jobs=1): err= 0: pid=2540866: Mon Jun 10 11:53:47 2024 00:45:20.089 read: IOPS=234, BW=29.2MiB/s (30.7MB/s)(148MiB/5047msec) 00:45:20.089 slat (nsec): min=7855, max=31932, avg=8533.10, stdev=1016.14 00:45:20.089 clat (usec): min=4739, max=90425, avg=12808.23, stdev=12032.83 00:45:20.089 lat (usec): min=4748, max=90434, avg=12816.76, stdev=12032.94 00:45:20.089 clat percentiles (usec): 
00:45:20.089 | 1.00th=[ 5080], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7701], 00:45:20.089 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10159], 00:45:20.089 | 70.00th=[10814], 80.00th=[11731], 90.00th=[13566], 95.00th=[49021], 00:45:20.089 | 99.00th=[51643], 99.50th=[54264], 99.90th=[90702], 99.95th=[90702], 00:45:20.089 | 99.99th=[90702] 00:45:20.089 bw ( KiB/s): min=17920, max=40448, per=36.84%, avg=30156.80, stdev=6934.40, samples=10 00:45:20.089 iops : min= 140, max= 316, avg=235.60, stdev=54.18, samples=10 00:45:20.089 lat (msec) : 10=57.49%, 20=34.29%, 50=5.08%, 100=3.13% 00:45:20.089 cpu : usr=96.21%, sys=3.51%, ctx=8, majf=0, minf=69 00:45:20.089 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:20.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:20.089 issued rwts: total=1181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:20.089 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:20.089 00:45:20.089 Run status group 0 (all jobs): 00:45:20.089 READ: bw=79.9MiB/s (83.8MB/s), 24.5MiB/s-29.2MiB/s (25.7MB/s-30.7MB/s), io=404MiB (423MB), run=5010-5047msec 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 bdev_null0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 [2024-06-10 11:53:48.086490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 bdev_null1 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 bdev_null2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:20.089 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:20.090 { 00:45:20.090 "params": { 00:45:20.090 "name": "Nvme$subsystem", 00:45:20.090 "trtype": "$TEST_TRANSPORT", 00:45:20.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:20.090 "adrfam": "ipv4", 00:45:20.090 "trsvcid": "$NVMF_PORT", 00:45:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:20.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:20.090 "hdgst": ${hdgst:-false}, 00:45:20.090 "ddgst": ${ddgst:-false} 00:45:20.090 }, 00:45:20.090 "method": "bdev_nvme_attach_controller" 00:45:20.090 } 00:45:20.090 EOF 00:45:20.090 )") 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:20.090 { 00:45:20.090 "params": { 00:45:20.090 "name": "Nvme$subsystem", 00:45:20.090 "trtype": "$TEST_TRANSPORT", 00:45:20.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:20.090 "adrfam": "ipv4", 00:45:20.090 "trsvcid": "$NVMF_PORT", 00:45:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:20.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:20.090 "hdgst": ${hdgst:-false}, 00:45:20.090 "ddgst": ${ddgst:-false} 00:45:20.090 }, 00:45:20.090 "method": "bdev_nvme_attach_controller" 00:45:20.090 } 00:45:20.090 EOF 00:45:20.090 )") 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:20.090 { 00:45:20.090 "params": { 00:45:20.090 "name": "Nvme$subsystem", 00:45:20.090 "trtype": "$TEST_TRANSPORT", 00:45:20.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:20.090 "adrfam": "ipv4", 00:45:20.090 "trsvcid": "$NVMF_PORT", 00:45:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:20.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:20.090 "hdgst": ${hdgst:-false}, 00:45:20.090 "ddgst": ${ddgst:-false} 00:45:20.090 }, 00:45:20.090 "method": "bdev_nvme_attach_controller" 00:45:20.090 } 00:45:20.090 EOF 00:45:20.090 )") 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:20.090 "params": { 00:45:20.090 "name": "Nvme0", 00:45:20.090 "trtype": "tcp", 00:45:20.090 "traddr": "10.0.0.2", 00:45:20.090 "adrfam": "ipv4", 00:45:20.090 "trsvcid": "4420", 00:45:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:20.090 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:20.090 "hdgst": false, 00:45:20.090 "ddgst": false 00:45:20.090 }, 00:45:20.090 "method": "bdev_nvme_attach_controller" 00:45:20.090 },{ 00:45:20.090 "params": { 00:45:20.090 "name": "Nvme1", 00:45:20.090 "trtype": "tcp", 00:45:20.090 "traddr": "10.0.0.2", 00:45:20.090 "adrfam": "ipv4", 00:45:20.090 "trsvcid": "4420", 00:45:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:20.090 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:20.090 "hdgst": false, 00:45:20.090 "ddgst": false 00:45:20.090 }, 00:45:20.090 "method": "bdev_nvme_attach_controller" 00:45:20.090 },{ 00:45:20.090 "params": { 00:45:20.090 "name": "Nvme2", 00:45:20.090 "trtype": "tcp", 00:45:20.090 "traddr": "10.0.0.2", 00:45:20.090 "adrfam": "ipv4", 00:45:20.090 "trsvcid": "4420", 00:45:20.090 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:20.090 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:20.090 "hdgst": false, 00:45:20.090 "ddgst": false 00:45:20.090 }, 00:45:20.090 "method": "bdev_nvme_attach_controller" 00:45:20.090 }' 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1344 -- # asan_lib= 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:20.090 11:53:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:20.090 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:20.090 ... 00:45:20.090 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:20.090 ... 00:45:20.090 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:20.090 ... 00:45:20.090 fio-3.35 00:45:20.090 Starting 24 threads 00:45:20.090 EAL: No free 2048 kB hugepages reported on node 1 00:45:32.389 00:45:32.389 filename0: (groupid=0, jobs=1): err= 0: pid=2542345: Mon Jun 10 11:53:59 2024 00:45:32.389 read: IOPS=501, BW=2006KiB/s (2054kB/s)(19.6MiB/10017msec) 00:45:32.389 slat (nsec): min=7902, max=77115, avg=18317.82, stdev=12793.83 00:45:32.389 clat (usec): min=4973, max=34246, avg=31739.10, stdev=3297.70 00:45:32.389 lat (usec): min=4989, max=34254, avg=31757.42, stdev=3297.42 00:45:32.389 clat percentiles (usec): 00:45:32.389 | 1.00th=[ 7308], 5.00th=[31065], 10.00th=[31327], 20.00th=[31851], 00:45:32.389 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.389 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.389 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:45:32.389 | 99.99th=[34341] 00:45:32.389 bw ( KiB/s): min= 1916, max= 2432, per=4.19%, avg=2007.16, stdev=121.40, samples=19 00:45:32.389 iops : min= 479, max= 608, avg=501.79, stdev=30.35, samples=19 00:45:32.389 lat (msec) : 10=1.55%, 20=0.08%, 50=98.37% 00:45:32.389 cpu : usr=97.91%, sys=1.11%, ctx=39, majf=0, minf=26 00:45:32.389 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:32.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.389 filename0: (groupid=0, jobs=1): err= 0: pid=2542346: Mon Jun 10 11:53:59 2024 00:45:32.389 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10017msec) 00:45:32.389 slat (nsec): min=7850, max=99136, avg=35490.78, stdev=18818.67 00:45:32.389 clat (usec): min=16987, max=63716, avg=32031.21, stdev=2458.34 00:45:32.389 lat (usec): min=16996, max=63738, avg=32066.70, stdev=2458.01 00:45:32.389 clat percentiles (usec): 00:45:32.389 | 1.00th=[24773], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:45:32.389 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:45:32.389 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:45:32.389 | 99.00th=[35914], 99.50th=[53740], 99.90th=[63701], 99.95th=[63701], 00:45:32.389 | 99.99th=[63701] 00:45:32.389 bw ( KiB/s): min= 1763, max= 2048, per=4.12%, avg=1975.30, stdev=81.47, samples=20 00:45:32.389 iops : min= 440, max= 512, avg=493.75, stdev=20.44, samples=20 00:45:32.389 lat (msec) : 20=0.32%, 50=99.15%, 
100=0.52% 00:45:32.389 cpu : usr=99.01%, sys=0.67%, ctx=72, majf=0, minf=22 00:45:32.389 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:45:32.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.389 filename0: (groupid=0, jobs=1): err= 0: pid=2542347: Mon Jun 10 11:53:59 2024 00:45:32.389 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10011msec) 00:45:32.389 slat (nsec): min=7911, max=91266, avg=18365.50, stdev=11405.02 00:45:32.389 clat (usec): min=13238, max=42410, avg=32126.63, stdev=1437.95 00:45:32.389 lat (usec): min=13247, max=42433, avg=32144.99, stdev=1437.91 00:45:32.389 clat percentiles (usec): 00:45:32.389 | 1.00th=[30278], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:45:32.389 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:45:32.389 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.389 | 99.00th=[33817], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:45:32.389 | 99.99th=[42206] 00:45:32.389 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.89, stdev=77.69, samples=19 00:45:32.389 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:45:32.389 lat (msec) : 20=0.28%, 50=99.72% 00:45:32.389 cpu : usr=99.10%, sys=0.60%, ctx=8, majf=0, minf=19 00:45:32.389 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:32.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 issued rwts: total=4958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.389 filename0: (groupid=0, jobs=1): err= 0: pid=2542348: Mon Jun 10 11:53:59 2024 00:45:32.389 read: IOPS=501, BW=2004KiB/s (2052kB/s)(19.6MiB/10027msec) 00:45:32.389 slat (nsec): min=7927, max=94346, avg=15427.30, stdev=12105.44 00:45:32.389 clat (usec): min=5281, max=34301, avg=31797.65, stdev=2918.87 00:45:32.389 lat (usec): min=5299, max=34311, avg=31813.07, stdev=2918.22 00:45:32.389 clat percentiles (usec): 00:45:32.389 | 1.00th=[10552], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:45:32.389 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.389 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.389 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:45:32.389 | 99.99th=[34341] 00:45:32.389 bw ( KiB/s): min= 1912, max= 2304, per=4.18%, avg=2002.80, stdev=95.76, samples=20 00:45:32.389 iops : min= 478, max= 576, avg=500.70, stdev=23.94, samples=20 00:45:32.389 lat (msec) : 10=0.78%, 20=0.88%, 50=98.35% 00:45:32.389 cpu : usr=97.50%, sys=1.44%, ctx=56, majf=0, minf=34 00:45:32.389 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:32.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.389 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.389 filename0: (groupid=0, jobs=1): err= 0: pid=2542349: Mon Jun 10 11:53:59 2024 00:45:32.389 read: 
IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10019msec) 00:45:32.389 slat (usec): min=7, max=114, avg=27.11, stdev=19.93 00:45:32.389 clat (usec): min=11200, max=57538, avg=31699.67, stdev=3730.72 00:45:32.389 lat (usec): min=11209, max=57561, avg=31726.79, stdev=3733.24 00:45:32.389 clat percentiles (usec): 00:45:32.389 | 1.00th=[20579], 5.00th=[23725], 10.00th=[28181], 20.00th=[31589], 00:45:32.389 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:45:32.389 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[34866], 00:45:32.389 | 99.00th=[42206], 99.50th=[52691], 99.90th=[57410], 99.95th=[57410], 00:45:32.389 | 99.99th=[57410] 00:45:32.389 bw ( KiB/s): min= 1792, max= 2160, per=4.18%, avg=2001.50, stdev=102.01, samples=20 00:45:32.389 iops : min= 448, max= 540, avg=500.30, stdev=25.42, samples=20 00:45:32.389 lat (msec) : 20=0.76%, 50=98.65%, 100=0.60% 00:45:32.389 cpu : usr=99.14%, sys=0.56%, ctx=8, majf=0, minf=25 00:45:32.389 IO depths : 1=4.3%, 2=8.7%, 4=18.7%, 8=59.4%, 16=8.9%, 32=0.0%, >=64=0.0% 00:45:32.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.389 complete : 0=0.0%, 4=92.5%, 8=2.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename0: (groupid=0, jobs=1): err= 0: pid=2542350: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10002msec) 00:45:32.390 slat (nsec): min=6452, max=96519, avg=34427.68, stdev=17370.89 00:45:32.390 clat (usec): min=2496, max=57796, avg=31938.33, stdev=2606.89 00:45:32.390 lat (usec): min=2504, max=57814, avg=31972.76, stdev=2607.91 00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[25297], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:45:32.390 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:45:32.390 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:45:32.390 | 99.00th=[33817], 99.50th=[38011], 99.90th=[57934], 99.95th=[57934], 00:45:32.390 | 99.99th=[57934] 00:45:32.390 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1967.32, stdev=76.07, samples=19 00:45:32.390 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:45:32.390 lat (msec) : 4=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:45:32.390 cpu : usr=99.02%, sys=0.59%, ctx=51, majf=0, minf=21 00:45:32.390 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:32.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename0: (groupid=0, jobs=1): err= 0: pid=2542351: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.3MiB/10003msec) 00:45:32.390 slat (usec): min=6, max=101, avg=37.08, stdev=18.79 00:45:32.390 clat (usec): min=2026, max=64742, avg=32001.42, stdev=2180.15 00:45:32.390 lat (usec): min=2035, max=64758, avg=32038.50, stdev=2180.35 00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[29754], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:45:32.390 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:45:32.390 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:45:32.390 | 99.00th=[33424], 
99.50th=[35914], 99.90th=[57410], 99.95th=[57410], 00:45:32.390 | 99.99th=[64750] 00:45:32.390 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1974.05, stdev=77.30, samples=19 00:45:32.390 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:45:32.390 lat (msec) : 4=0.10%, 20=0.32%, 50=99.25%, 100=0.32% 00:45:32.390 cpu : usr=98.11%, sys=1.04%, ctx=686, majf=0, minf=22 00:45:32.390 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:32.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=4949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename0: (groupid=0, jobs=1): err= 0: pid=2542352: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10014msec) 00:45:32.390 slat (nsec): min=7851, max=71024, avg=15863.17, stdev=9177.73 00:45:32.390 clat (usec): min=21740, max=64749, avg=32272.09, stdev=1560.38 00:45:32.390 lat (usec): min=21749, max=64782, avg=32287.95, stdev=1560.39 00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[30016], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:45:32.390 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.390 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.390 | 99.00th=[34341], 99.50th=[41157], 99.90th=[51643], 99.95th=[51643], 00:45:32.390 | 99.99th=[64750] 00:45:32.390 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.00, stdev=75.26, samples=19 00:45:32.390 iops : min= 448, max= 512, avg=493.21, stdev=18.78, samples=19 00:45:32.390 lat (msec) : 50=99.68%, 100=0.32% 00:45:32.390 cpu : usr=99.28%, sys=0.41%, ctx=17, majf=0, minf=21 00:45:32.390 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:45:32.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename1: (groupid=0, jobs=1): err= 0: pid=2542353: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10009msec) 00:45:32.390 slat (usec): min=7, max=103, avg=22.25, stdev=17.04 00:45:32.390 clat (usec): min=11444, max=60330, avg=32364.17, stdev=5182.97 00:45:32.390 lat (usec): min=11458, max=60389, avg=32386.42, stdev=5184.25 00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[16909], 5.00th=[22676], 10.00th=[27919], 20.00th=[31589], 00:45:32.390 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:45:32.390 | 70.00th=[32637], 80.00th=[32900], 90.00th=[38011], 95.00th=[41157], 00:45:32.390 | 99.00th=[47973], 99.50th=[52691], 99.90th=[60031], 99.95th=[60031], 00:45:32.390 | 99.99th=[60556] 00:45:32.390 bw ( KiB/s): min= 1840, max= 2112, per=4.11%, avg=1966.47, stdev=78.60, samples=19 00:45:32.390 iops : min= 460, max= 528, avg=491.58, stdev=19.68, samples=19 00:45:32.390 lat (msec) : 20=2.64%, 50=96.59%, 100=0.77% 00:45:32.390 cpu : usr=96.41%, sys=1.92%, ctx=102, majf=0, minf=21 00:45:32.390 IO depths : 1=1.6%, 2=3.6%, 4=11.5%, 8=71.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:45:32.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 complete : 0=0.0%, 
4=90.7%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=4926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename1: (groupid=0, jobs=1): err= 0: pid=2542354: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10001msec) 00:45:32.390 slat (nsec): min=7930, max=92531, avg=16457.71, stdev=10389.42 00:45:32.390 clat (usec): min=24282, max=42149, avg=32216.12, stdev=928.14 00:45:32.390 lat (usec): min=24290, max=42176, avg=32232.58, stdev=927.10 00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:45:32.390 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:45:32.390 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.390 | 99.00th=[33817], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:45:32.390 | 99.99th=[42206] 00:45:32.390 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1972.63, stdev=63.87, samples=19 00:45:32.390 iops : min= 479, max= 512, avg=493.00, stdev=15.79, samples=19 00:45:32.390 lat (msec) : 50=100.00% 00:45:32.390 cpu : usr=99.15%, sys=0.54%, ctx=13, majf=0, minf=21 00:45:32.390 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:32.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename1: (groupid=0, jobs=1): err= 0: pid=2542356: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=483, BW=1936KiB/s (1982kB/s)(18.9MiB/10002msec) 00:45:32.390 slat (nsec): min=7851, max=85288, avg=22305.48, stdev=15089.25 00:45:32.390 clat (usec): min=5805, max=58019, avg=32922.08, stdev=4311.14 00:45:32.390 lat (usec): min=5813, max=58038, avg=32944.39, stdev=4311.57 00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[22938], 5.00th=[29754], 10.00th=[31589], 20.00th=[31851], 00:45:32.390 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:45:32.390 | 70.00th=[32637], 80.00th=[32900], 90.00th=[35390], 95.00th=[41157], 00:45:32.390 | 99.00th=[49546], 99.50th=[53740], 99.90th=[57934], 99.95th=[57934], 00:45:32.390 | 99.99th=[57934] 00:45:32.390 bw ( KiB/s): min= 1795, max= 2048, per=4.02%, avg=1923.53, stdev=59.88, samples=19 00:45:32.390 iops : min= 448, max= 512, avg=480.84, stdev=15.06, samples=19 00:45:32.390 lat (msec) : 10=0.21%, 20=0.37%, 50=98.49%, 100=0.93% 00:45:32.390 cpu : usr=99.10%, sys=0.59%, ctx=14, majf=0, minf=25 00:45:32.390 IO depths : 1=1.9%, 2=3.8%, 4=9.6%, 8=71.2%, 16=13.6%, 32=0.0%, >=64=0.0% 00:45:32.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 complete : 0=0.0%, 4=90.8%, 8=6.3%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=4840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename1: (groupid=0, jobs=1): err= 0: pid=2542357: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10002msec) 00:45:32.390 slat (nsec): min=6060, max=91337, avg=33085.70, stdev=16867.78 00:45:32.390 clat (usec): min=9810, max=65191, avg=32080.25, stdev=2474.77 00:45:32.390 lat (usec): min=9818, max=65209, avg=32113.34, stdev=2474.91 
00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[25822], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:45:32.390 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:45:32.390 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32637], 95.00th=[32900], 00:45:32.390 | 99.00th=[38011], 99.50th=[45876], 99.90th=[57934], 99.95th=[65274], 00:45:32.390 | 99.99th=[65274] 00:45:32.390 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1969.84, stdev=72.51, samples=19 00:45:32.390 iops : min= 448, max= 512, avg=492.42, stdev=18.23, samples=19 00:45:32.390 lat (msec) : 10=0.12%, 20=0.40%, 50=99.15%, 100=0.32% 00:45:32.390 cpu : usr=99.02%, sys=0.68%, ctx=15, majf=0, minf=22 00:45:32.390 IO depths : 1=4.0%, 2=9.9%, 4=24.0%, 8=53.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:45:32.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.390 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.390 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.390 filename1: (groupid=0, jobs=1): err= 0: pid=2542358: Mon Jun 10 11:53:59 2024 00:45:32.390 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10020msec) 00:45:32.390 slat (nsec): min=7933, max=92162, avg=20542.62, stdev=12386.00 00:45:32.390 clat (usec): min=20949, max=58177, avg=32270.84, stdev=1819.64 00:45:32.390 lat (usec): min=20958, max=58199, avg=32291.39, stdev=1819.26 00:45:32.390 clat percentiles (usec): 00:45:32.390 | 1.00th=[26608], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:45:32.390 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.390 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:45:32.390 | 99.00th=[37487], 99.50th=[38536], 99.90th=[57934], 99.95th=[57934], 00:45:32.390 | 99.99th=[57934] 00:45:32.390 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1970.40, stdev=72.99, samples=20 00:45:32.391 iops : min= 448, max= 512, avg=492.45, stdev=18.23, samples=20 00:45:32.391 lat (msec) : 50=99.68%, 100=0.32% 00:45:32.391 cpu : usr=97.25%, sys=1.52%, ctx=198, majf=0, minf=24 00:45:32.391 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename1: (groupid=0, jobs=1): err= 0: pid=2542359: Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10014msec) 00:45:32.391 slat (nsec): min=7945, max=69094, avg=13770.84, stdev=9582.70 00:45:32.391 clat (usec): min=21952, max=51773, avg=32294.80, stdev=1328.14 00:45:32.391 lat (usec): min=21962, max=51806, avg=32308.57, stdev=1327.75 00:45:32.391 clat percentiles (usec): 00:45:32.391 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:45:32.391 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.391 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.391 | 99.00th=[33817], 99.50th=[34341], 99.90th=[51643], 99.95th=[51643], 00:45:32.391 | 99.99th=[51643] 00:45:32.391 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.00, stdev=77.91, samples=19 00:45:32.391 iops : min= 448, max= 512, avg=493.21, stdev=19.44, samples=19 
00:45:32.391 lat (msec) : 50=99.68%, 100=0.32% 00:45:32.391 cpu : usr=98.94%, sys=0.68%, ctx=61, majf=0, minf=29 00:45:32.391 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename1: (groupid=0, jobs=1): err= 0: pid=2542360: Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10016msec) 00:45:32.391 slat (nsec): min=7901, max=95256, avg=15740.44, stdev=12196.01 00:45:32.391 clat (usec): min=6282, max=41540, avg=32079.43, stdev=1926.73 00:45:32.391 lat (usec): min=6295, max=41563, avg=32095.17, stdev=1926.29 00:45:32.391 clat percentiles (usec): 00:45:32.391 | 1.00th=[23987], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:45:32.391 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.391 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.391 | 99.00th=[33817], 99.50th=[36439], 99.90th=[41681], 99.95th=[41681], 00:45:32.391 | 99.99th=[41681] 00:45:32.391 bw ( KiB/s): min= 1916, max= 2048, per=4.15%, avg=1986.89, stdev=65.65, samples=19 00:45:32.391 iops : min= 479, max= 512, avg=496.68, stdev=16.38, samples=19 00:45:32.391 lat (msec) : 10=0.14%, 20=0.50%, 50=99.36% 00:45:32.391 cpu : usr=97.23%, sys=1.54%, ctx=84, majf=0, minf=22 00:45:32.391 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename1: (groupid=0, jobs=1): err= 0: pid=2542361: Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.4MiB/10007msec) 00:45:32.391 slat (usec): min=4, max=111, avg= 9.53, stdev= 4.11 00:45:32.391 clat (usec): min=2809, max=41984, avg=26714.22, stdev=6216.56 00:45:32.391 lat (usec): min=2817, max=41991, avg=26723.75, stdev=6217.08 00:45:32.391 clat percentiles (usec): 00:45:32.391 | 1.00th=[ 6915], 5.00th=[18482], 10.00th=[18482], 20.00th=[20579], 00:45:32.391 | 30.00th=[21103], 40.00th=[26346], 50.00th=[31065], 60.00th=[31589], 00:45:32.391 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:45:32.391 | 99.00th=[32900], 99.50th=[33817], 99.90th=[35390], 99.95th=[42206], 00:45:32.391 | 99.99th=[42206] 00:45:32.391 bw ( KiB/s): min= 1916, max= 3456, per=5.03%, avg=2409.05, stdev=521.99, samples=19 00:45:32.391 iops : min= 479, max= 864, avg=602.26, stdev=130.50, samples=19 00:45:32.391 lat (msec) : 4=0.27%, 10=1.05%, 20=16.48%, 50=82.20% 00:45:32.391 cpu : usr=99.09%, sys=0.58%, ctx=42, majf=0, minf=25 00:45:32.391 IO depths : 1=3.1%, 2=6.3%, 4=15.7%, 8=65.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=91.5%, 8=2.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=5978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename2: (groupid=0, jobs=1): err= 0: pid=2542362: 
Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10001msec) 00:45:32.391 slat (usec): min=6, max=105, avg=33.32, stdev=18.21 00:45:32.391 clat (usec): min=13149, max=58818, avg=32046.04, stdev=1977.30 00:45:32.391 lat (usec): min=13177, max=58835, avg=32079.37, stdev=1977.64 00:45:32.391 clat percentiles (usec): 00:45:32.391 | 1.00th=[30278], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:45:32.391 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:45:32.391 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:45:32.391 | 99.00th=[33424], 99.50th=[35914], 99.90th=[58983], 99.95th=[58983], 00:45:32.391 | 99.99th=[58983] 00:45:32.391 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.89, stdev=77.69, samples=19 00:45:32.391 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:45:32.391 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:45:32.391 cpu : usr=98.20%, sys=1.03%, ctx=57, majf=0, minf=20 00:45:32.391 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename2: (groupid=0, jobs=1): err= 0: pid=2542363: Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10024msec) 00:45:32.391 slat (usec): min=7, max=108, avg=13.17, stdev= 8.48 00:45:32.391 clat (usec): min=6848, max=35120, avg=31976.21, stdev=2416.53 00:45:32.391 lat (usec): min=6866, max=35129, avg=31989.38, stdev=2416.17 00:45:32.391 clat percentiles (usec): 00:45:32.391 | 1.00th=[13960], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:45:32.391 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.391 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:45:32.391 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:45:32.391 | 99.99th=[34866] 00:45:32.391 bw ( KiB/s): min= 1916, max= 2232, per=4.16%, avg=1993.00, stdev=85.30, samples=20 00:45:32.391 iops : min= 479, max= 558, avg=498.25, stdev=21.32, samples=20 00:45:32.391 lat (msec) : 10=0.54%, 20=0.56%, 50=98.90% 00:45:32.391 cpu : usr=98.40%, sys=0.87%, ctx=35, majf=0, minf=20 00:45:32.391 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=4999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename2: (groupid=0, jobs=1): err= 0: pid=2542364: Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10014msec) 00:45:32.391 slat (nsec): min=7528, max=72873, avg=18087.59, stdev=12608.19 00:45:32.391 clat (usec): min=13358, max=56109, avg=31917.80, stdev=3087.93 00:45:32.391 lat (usec): min=13371, max=56118, avg=31935.89, stdev=3088.69 00:45:32.391 clat percentiles (usec): 00:45:32.391 | 1.00th=[20055], 5.00th=[27395], 10.00th=[31327], 20.00th=[31589], 00:45:32.391 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:45:32.391 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 
95.00th=[33162], 00:45:32.391 | 99.00th=[42206], 99.50th=[53216], 99.90th=[55313], 99.95th=[55837], 00:45:32.391 | 99.99th=[56361] 00:45:32.391 bw ( KiB/s): min= 1900, max= 2219, per=4.16%, avg=1993.79, stdev=88.69, samples=19 00:45:32.391 iops : min= 475, max= 554, avg=498.37, stdev=22.02, samples=19 00:45:32.391 lat (msec) : 20=0.92%, 50=98.50%, 100=0.58% 00:45:32.391 cpu : usr=97.86%, sys=1.12%, ctx=39, majf=0, minf=23 00:45:32.391 IO depths : 1=3.3%, 2=9.1%, 4=23.6%, 8=54.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=4994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename2: (groupid=0, jobs=1): err= 0: pid=2542365: Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10001msec) 00:45:32.391 slat (nsec): min=6294, max=91607, avg=33230.25, stdev=16585.25 00:45:32.391 clat (usec): min=13000, max=59052, avg=32063.47, stdev=2003.19 00:45:32.391 lat (usec): min=13017, max=59070, avg=32096.70, stdev=2002.68 00:45:32.391 clat percentiles (usec): 00:45:32.391 | 1.00th=[30278], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:45:32.391 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:45:32.391 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32637], 95.00th=[32900], 00:45:32.391 | 99.00th=[33817], 99.50th=[35914], 99.90th=[58983], 99.95th=[58983], 00:45:32.391 | 99.99th=[58983] 00:45:32.391 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1967.16, stdev=76.45, samples=19 00:45:32.391 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:45:32.391 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:45:32.391 cpu : usr=99.18%, sys=0.53%, ctx=9, majf=0, minf=25 00:45:32.391 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:32.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.391 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.391 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.391 filename2: (groupid=0, jobs=1): err= 0: pid=2542366: Mon Jun 10 11:53:59 2024 00:45:32.391 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10007msec) 00:45:32.391 slat (nsec): min=7947, max=99818, avg=24226.58, stdev=18121.90 00:45:32.391 clat (usec): min=25400, max=46019, avg=32203.11, stdev=1048.86 00:45:32.391 lat (usec): min=25410, max=46050, avg=32227.34, stdev=1047.44 00:45:32.392 clat percentiles (usec): 00:45:32.392 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:45:32.392 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.392 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:45:32.392 | 99.00th=[33817], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:45:32.392 | 99.99th=[45876] 00:45:32.392 bw ( KiB/s): min= 1792, max= 2052, per=4.12%, avg=1974.84, stdev=78.11, samples=19 00:45:32.392 iops : min= 448, max= 513, avg=493.63, stdev=19.59, samples=19 00:45:32.392 lat (msec) : 50=100.00% 00:45:32.392 cpu : usr=98.98%, sys=0.70%, ctx=31, majf=0, minf=22 00:45:32.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:32.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:45:32.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.392 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.392 filename2: (groupid=0, jobs=1): err= 0: pid=2542367: Mon Jun 10 11:53:59 2024 00:45:32.392 read: IOPS=497, BW=1991KiB/s (2039kB/s)(19.5MiB/10017msec) 00:45:32.392 slat (usec): min=7, max=113, avg=16.53, stdev=13.52 00:45:32.392 clat (usec): min=13894, max=56274, avg=32004.72, stdev=2573.00 00:45:32.392 lat (usec): min=13908, max=56295, avg=32021.24, stdev=2572.29 00:45:32.392 clat percentiles (usec): 00:45:32.392 | 1.00th=[21627], 5.00th=[28705], 10.00th=[31327], 20.00th=[31589], 00:45:32.392 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:45:32.392 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.392 | 99.00th=[36963], 99.50th=[46400], 99.90th=[56361], 99.95th=[56361], 00:45:32.392 | 99.99th=[56361] 00:45:32.392 bw ( KiB/s): min= 1795, max= 2272, per=4.15%, avg=1987.25, stdev=102.71, samples=20 00:45:32.392 iops : min= 448, max= 568, avg=496.70, stdev=25.71, samples=20 00:45:32.392 lat (msec) : 20=0.20%, 50=99.36%, 100=0.44% 00:45:32.392 cpu : usr=99.29%, sys=0.42%, ctx=9, majf=0, minf=25 00:45:32.392 IO depths : 1=5.6%, 2=11.2%, 4=22.9%, 8=53.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:45:32.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.392 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.392 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.392 filename2: (groupid=0, jobs=1): err= 0: pid=2542368: Mon Jun 10 11:53:59 2024 00:45:32.392 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10003msec) 00:45:32.392 slat (usec): min=6, max=100, avg=35.51, stdev=17.77 00:45:32.392 clat (usec): min=13037, max=61561, avg=32034.66, stdev=2111.75 00:45:32.392 lat (usec): min=13050, max=61578, avg=32070.17, stdev=2111.51 00:45:32.392 clat percentiles (usec): 00:45:32.392 | 1.00th=[30016], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:45:32.392 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:45:32.392 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:45:32.392 | 99.00th=[33424], 99.50th=[35914], 99.90th=[61604], 99.95th=[61604], 00:45:32.392 | 99.99th=[61604] 00:45:32.392 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1967.16, stdev=76.45, samples=19 00:45:32.392 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:45:32.392 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:45:32.392 cpu : usr=99.08%, sys=0.63%, ctx=16, majf=0, minf=24 00:45:32.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:32.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.392 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.392 filename2: (groupid=0, jobs=1): err= 0: pid=2542369: Mon Jun 10 11:53:59 2024 00:45:32.392 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10016msec) 00:45:32.392 slat (nsec): min=7503, max=58376, avg=9654.16, stdev=3971.73 00:45:32.392 clat (usec): min=23619, max=55911, avg=32331.14, stdev=1677.46 00:45:32.392 lat (usec): min=23628, max=55932, 
avg=32340.79, stdev=1677.31 00:45:32.392 clat percentiles (usec): 00:45:32.392 | 1.00th=[30016], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:45:32.392 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:45:32.392 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:45:32.392 | 99.00th=[36439], 99.50th=[39060], 99.90th=[55837], 99.95th=[55837], 00:45:32.392 | 99.99th=[55837] 00:45:32.392 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1970.50, stdev=76.39, samples=20 00:45:32.392 iops : min= 448, max= 512, avg=492.55, stdev=19.15, samples=20 00:45:32.392 lat (msec) : 50=99.68%, 100=0.32% 00:45:32.392 cpu : usr=98.97%, sys=0.69%, ctx=25, majf=0, minf=18 00:45:32.392 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:45:32.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.392 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:32.392 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:32.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:32.392 00:45:32.392 Run status group 0 (all jobs): 00:45:32.392 READ: bw=46.8MiB/s (49.0MB/s), 1936KiB/s-2390KiB/s (1982kB/s-2447kB/s), io=469MiB (492MB), run=10001-10027msec 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 
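The run-status line above closes out the random-read pass, and the xtrace entries around this point tear the three test subsystems back down through rpc_cmd, the harness wrapper around scripts/rpc.py. Outside the harness the same teardown can be issued directly; a minimal sketch, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket and the subsystem/bdev names used above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path taken from this job
for i in 0 1 2; do
    # drop the subsystem first, then free the null bdev that backed its namespace
    sudo "$SPDK/scripts/rpc.py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    sudo "$SPDK/scripts/rpc.py" bdev_null_delete "bdev_null$i"
done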
11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 bdev_null0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.392 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.393 [2024-06-10 11:53:59.891711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.393 bdev_null1 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:45:32.393 11:53:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:32.393 { 00:45:32.393 "params": { 00:45:32.393 "name": "Nvme$subsystem", 00:45:32.393 "trtype": "$TEST_TRANSPORT", 00:45:32.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:32.393 "adrfam": "ipv4", 00:45:32.393 "trsvcid": "$NVMF_PORT", 00:45:32.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:32.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:32.393 "hdgst": ${hdgst:-false}, 00:45:32.393 "ddgst": ${ddgst:-false} 00:45:32.393 }, 00:45:32.393 "method": "bdev_nvme_attach_controller" 00:45:32.393 } 00:45:32.393 EOF 00:45:32.393 )") 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:32.393 { 00:45:32.393 "params": { 00:45:32.393 "name": "Nvme$subsystem", 00:45:32.393 "trtype": "$TEST_TRANSPORT", 00:45:32.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:32.393 "adrfam": "ipv4", 00:45:32.393 "trsvcid": "$NVMF_PORT", 00:45:32.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:32.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:32.393 "hdgst": ${hdgst:-false}, 00:45:32.393 "ddgst": ${ddgst:-false} 00:45:32.393 }, 00:45:32.393 "method": "bdev_nvme_attach_controller" 00:45:32.393 } 00:45:32.393 EOF 
00:45:32.393 )") 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:32.393 "params": { 00:45:32.393 "name": "Nvme0", 00:45:32.393 "trtype": "tcp", 00:45:32.393 "traddr": "10.0.0.2", 00:45:32.393 "adrfam": "ipv4", 00:45:32.393 "trsvcid": "4420", 00:45:32.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:32.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:32.393 "hdgst": false, 00:45:32.393 "ddgst": false 00:45:32.393 }, 00:45:32.393 "method": "bdev_nvme_attach_controller" 00:45:32.393 },{ 00:45:32.393 "params": { 00:45:32.393 "name": "Nvme1", 00:45:32.393 "trtype": "tcp", 00:45:32.393 "traddr": "10.0.0.2", 00:45:32.393 "adrfam": "ipv4", 00:45:32.393 "trsvcid": "4420", 00:45:32.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:32.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:32.393 "hdgst": false, 00:45:32.393 "ddgst": false 00:45:32.393 }, 00:45:32.393 "method": "bdev_nvme_attach_controller" 00:45:32.393 }' 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:45:32.393 11:53:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:32.393 11:54:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:32.393 11:54:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:32.393 11:54:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:32.393 11:54:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:32.393 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:32.393 ... 00:45:32.393 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:32.393 ... 
00:45:32.393 fio-3.35 00:45:32.393 Starting 4 threads 00:45:32.393 EAL: No free 2048 kB hugepages reported on node 1 00:45:37.691 00:45:37.691 filename0: (groupid=0, jobs=1): err= 0: pid=2544632: Mon Jun 10 11:54:06 2024 00:45:37.691 read: IOPS=2060, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5002msec) 00:45:37.691 slat (nsec): min=7840, max=40937, avg=8968.49, stdev=3436.33 00:45:37.691 clat (usec): min=1334, max=46161, avg=3858.35, stdev=1345.98 00:45:37.691 lat (usec): min=1342, max=46186, avg=3867.32, stdev=1345.98 00:45:37.691 clat percentiles (usec): 00:45:37.691 | 1.00th=[ 2769], 5.00th=[ 3163], 10.00th=[ 3294], 20.00th=[ 3425], 00:45:37.691 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3720], 00:45:37.691 | 70.00th=[ 3752], 80.00th=[ 3949], 90.00th=[ 5080], 95.00th=[ 5407], 00:45:37.691 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6783], 99.95th=[45876], 00:45:37.691 | 99.99th=[46400] 00:45:37.691 bw ( KiB/s): min=15086, max=17056, per=24.42%, avg=16428.22, stdev=585.98, samples=9 00:45:37.691 iops : min= 1885, max= 2132, avg=2053.44, stdev=73.46, samples=9 00:45:37.691 lat (msec) : 2=0.14%, 4=80.49%, 10=19.30%, 50=0.08% 00:45:37.691 cpu : usr=96.46%, sys=3.24%, ctx=8, majf=0, minf=9 00:45:37.691 IO depths : 1=0.1%, 2=0.6%, 4=71.8%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:37.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.691 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.691 issued rwts: total=10306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:37.691 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:37.691 filename0: (groupid=0, jobs=1): err= 0: pid=2544633: Mon Jun 10 11:54:06 2024 00:45:37.691 read: IOPS=2134, BW=16.7MiB/s (17.5MB/s)(83.4MiB/5003msec) 00:45:37.691 slat (nsec): min=7846, max=64034, avg=9035.20, stdev=3580.06 00:45:37.691 clat (usec): min=1978, max=6353, avg=3723.63, stdev=564.90 00:45:37.691 lat (usec): min=1986, max=6366, avg=3732.67, stdev=564.98 00:45:37.691 clat percentiles (usec): 00:45:37.692 | 1.00th=[ 2507], 5.00th=[ 2999], 10.00th=[ 3228], 20.00th=[ 3425], 00:45:37.692 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3720], 00:45:37.692 | 70.00th=[ 3752], 80.00th=[ 3818], 90.00th=[ 4490], 95.00th=[ 5145], 00:45:37.692 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6128], 99.95th=[ 6259], 00:45:37.692 | 99.99th=[ 6325] 00:45:37.692 bw ( KiB/s): min=16624, max=17520, per=25.39%, avg=17080.00, stdev=244.32, samples=10 00:45:37.692 iops : min= 2078, max= 2190, avg=2135.00, stdev=30.54, samples=10 00:45:37.692 lat (msec) : 2=0.03%, 4=84.48%, 10=15.49% 00:45:37.692 cpu : usr=96.50%, sys=3.18%, ctx=6, majf=0, minf=1 00:45:37.692 IO depths : 1=0.1%, 2=1.0%, 4=70.2%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:37.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.692 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.692 issued rwts: total=10678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:37.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:37.692 filename1: (groupid=0, jobs=1): err= 0: pid=2544634: Mon Jun 10 11:54:06 2024 00:45:37.692 read: IOPS=2105, BW=16.4MiB/s (17.2MB/s)(82.3MiB/5001msec) 00:45:37.692 slat (nsec): min=7844, max=68692, avg=9082.91, stdev=3749.02 00:45:37.692 clat (usec): min=1623, max=6579, avg=3774.43, stdev=550.12 00:45:37.692 lat (usec): min=1631, max=6588, avg=3783.51, stdev=549.99 00:45:37.692 clat percentiles (usec): 00:45:37.692 | 1.00th=[ 2835], 
5.00th=[ 3163], 10.00th=[ 3294], 20.00th=[ 3458], 00:45:37.692 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752], 00:45:37.692 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4490], 95.00th=[ 5145], 00:45:37.692 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 6390], 99.95th=[ 6521], 00:45:37.692 | 99.99th=[ 6587] 00:45:37.692 bw ( KiB/s): min=16592, max=17360, per=25.11%, avg=16892.44, stdev=307.33, samples=9 00:45:37.692 iops : min= 2074, max= 2170, avg=2111.56, stdev=38.42, samples=9 00:45:37.692 lat (msec) : 2=0.03%, 4=83.51%, 10=16.46% 00:45:37.692 cpu : usr=96.58%, sys=3.10%, ctx=10, majf=0, minf=9 00:45:37.692 IO depths : 1=0.2%, 2=0.9%, 4=72.1%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:37.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.692 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.692 issued rwts: total=10529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:37.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:37.692 filename1: (groupid=0, jobs=1): err= 0: pid=2544635: Mon Jun 10 11:54:06 2024 00:45:37.692 read: IOPS=2109, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5002msec) 00:45:37.692 slat (nsec): min=7832, max=63594, avg=8944.59, stdev=3373.56 00:45:37.692 clat (usec): min=1502, max=6603, avg=3767.29, stdev=602.00 00:45:37.692 lat (usec): min=1518, max=6615, avg=3776.24, stdev=601.97 00:45:37.692 clat percentiles (usec): 00:45:37.692 | 1.00th=[ 2704], 5.00th=[ 3097], 10.00th=[ 3228], 20.00th=[ 3425], 00:45:37.692 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:45:37.692 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4686], 95.00th=[ 5211], 00:45:37.692 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 6259], 99.95th=[ 6259], 00:45:37.692 | 99.99th=[ 6587] 00:45:37.692 bw ( KiB/s): min=16000, max=17328, per=25.09%, avg=16880.20, stdev=379.79, samples=10 00:45:37.692 iops : min= 2000, max= 2166, avg=2110.00, stdev=47.46, samples=10 00:45:37.692 lat (msec) : 2=0.10%, 4=83.20%, 10=16.70% 00:45:37.692 cpu : usr=96.46%, sys=3.22%, ctx=9, majf=0, minf=0 00:45:37.692 IO depths : 1=0.1%, 2=0.6%, 4=71.9%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:37.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.692 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:37.692 issued rwts: total=10553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:37.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:37.692 00:45:37.692 Run status group 0 (all jobs): 00:45:37.692 READ: bw=65.7MiB/s (68.9MB/s), 16.1MiB/s-16.7MiB/s (16.9MB/s-17.5MB/s), io=329MiB (345MB), run=5001-5003msec 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 00:45:37.692 real 0m24.402s 00:45:37.692 user 5m20.307s 00:45:37.692 sys 0m4.291s 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 ************************************ 00:45:37.692 END TEST fio_dif_rand_params 00:45:37.692 ************************************ 00:45:37.692 11:54:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:37.692 11:54:06 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:37.692 11:54:06 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 ************************************ 00:45:37.692 START TEST fio_dif_digest 00:45:37.692 ************************************ 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 bdev_null0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.692 [2024-06-10 11:54:06.432649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.692 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:37.693 { 00:45:37.693 "params": { 00:45:37.693 "name": "Nvme$subsystem", 00:45:37.693 "trtype": "$TEST_TRANSPORT", 00:45:37.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.693 "adrfam": "ipv4", 00:45:37.693 "trsvcid": "$NVMF_PORT", 00:45:37.693 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.693 "hdgst": ${hdgst:-false}, 00:45:37.693 "ddgst": ${ddgst:-false} 00:45:37.693 }, 00:45:37.693 "method": "bdev_nvme_attach_controller" 00:45:37.693 } 00:45:37.693 EOF 00:45:37.693 )") 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:45:37.693 "params": { 00:45:37.693 "name": "Nvme0", 00:45:37.693 "trtype": "tcp", 00:45:37.693 "traddr": "10.0.0.2", 00:45:37.693 "adrfam": "ipv4", 00:45:37.693 "trsvcid": "4420", 00:45:37.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:37.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:37.693 "hdgst": true, 00:45:37.693 "ddgst": true 00:45:37.693 }, 00:45:37.693 "method": "bdev_nvme_attach_controller" 00:45:37.693 }' 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:37.693 11:54:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.954 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:37.954 ... 
00:45:37.954 fio-3.35 00:45:37.954 Starting 3 threads 00:45:37.954 EAL: No free 2048 kB hugepages reported on node 1 00:45:50.193 00:45:50.193 filename0: (groupid=0, jobs=1): err= 0: pid=2546203: Mon Jun 10 11:54:17 2024 00:45:50.193 read: IOPS=188, BW=23.6MiB/s (24.8MB/s)(237MiB/10051msec) 00:45:50.193 slat (usec): min=8, max=118, avg= 9.71, stdev= 3.53 00:45:50.193 clat (usec): min=11800, max=58838, avg=15843.23, stdev=4353.85 00:45:50.193 lat (usec): min=11809, max=58847, avg=15852.94, stdev=4353.80 00:45:50.193 clat percentiles (usec): 00:45:50.193 | 1.00th=[12780], 5.00th=[13566], 10.00th=[13960], 20.00th=[14353], 00:45:50.193 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:45:50.193 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:45:50.193 | 99.00th=[53216], 99.50th=[56886], 99.90th=[58459], 99.95th=[58983], 00:45:50.193 | 99.99th=[58983] 00:45:50.193 bw ( KiB/s): min=22528, max=26112, per=28.55%, avg=24281.60, stdev=1108.98, samples=20 00:45:50.193 iops : min= 176, max= 204, avg=189.70, stdev= 8.66, samples=20 00:45:50.193 lat (msec) : 20=98.95%, 100=1.05% 00:45:50.193 cpu : usr=85.13%, sys=8.90%, ctx=370, majf=0, minf=143 00:45:50.193 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.193 issued rwts: total=1899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.193 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:50.193 filename0: (groupid=0, jobs=1): err= 0: pid=2546204: Mon Jun 10 11:54:17 2024 00:45:50.193 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(301MiB/10046msec) 00:45:50.193 slat (usec): min=8, max=105, avg=10.11, stdev= 3.11 00:45:50.193 clat (usec): min=7947, max=53544, avg=12489.97, stdev=1573.49 00:45:50.193 lat (usec): min=7956, max=53557, avg=12500.08, stdev=1573.56 00:45:50.193 clat percentiles (usec): 00:45:50.193 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:45:50.193 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:45:50.193 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:45:50.193 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15664], 99.95th=[50594], 00:45:50.193 | 99.99th=[53740] 00:45:50.193 bw ( KiB/s): min=28928, max=32768, per=36.20%, avg=30786.90, stdev=1176.03, samples=20 00:45:50.193 iops : min= 226, max= 256, avg=240.50, stdev= 9.22, samples=20 00:45:50.193 lat (msec) : 10=3.03%, 20=96.88%, 100=0.08% 00:45:50.193 cpu : usr=93.15%, sys=4.97%, ctx=113, majf=0, minf=164 00:45:50.193 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.193 issued rwts: total=2407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.193 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:50.193 filename0: (groupid=0, jobs=1): err= 0: pid=2546205: Mon Jun 10 11:54:17 2024 00:45:50.193 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(297MiB/10048msec) 00:45:50.193 slat (nsec): min=8099, max=32909, avg=8956.02, stdev=1100.20 00:45:50.193 clat (usec): min=7883, max=51316, avg=12678.49, stdev=1502.78 00:45:50.193 lat (usec): min=7892, max=51325, avg=12687.44, stdev=1502.80 00:45:50.193 clat percentiles (usec): 00:45:50.193 | 1.00th=[ 8979], 5.00th=[10945], 
10.00th=[11469], 20.00th=[11994], 00:45:50.193 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:45:50.193 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14222], 00:45:50.193 | 99.00th=[15008], 99.50th=[15270], 99.90th=[16909], 99.95th=[47449], 00:45:50.193 | 99.99th=[51119] 00:45:50.193 bw ( KiB/s): min=28672, max=31488, per=35.67%, avg=30336.00, stdev=781.36, samples=20 00:45:50.193 iops : min= 224, max= 246, avg=237.00, stdev= 6.10, samples=20 00:45:50.193 lat (msec) : 10=2.57%, 20=97.34%, 50=0.04%, 100=0.04% 00:45:50.193 cpu : usr=96.15%, sys=3.54%, ctx=49, majf=0, minf=78 00:45:50.193 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.194 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.194 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:50.194 00:45:50.194 Run status group 0 (all jobs): 00:45:50.194 READ: bw=83.1MiB/s (87.1MB/s), 23.6MiB/s-29.9MiB/s (24.8MB/s-31.4MB/s), io=835MiB (875MB), run=10046-10051msec 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:50.194 00:45:50.194 real 0m11.118s 00:45:50.194 user 0m42.847s 00:45:50.194 sys 0m2.076s 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:50.194 11:54:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.194 ************************************ 00:45:50.194 END TEST fio_dif_digest 00:45:50.194 ************************************ 00:45:50.194 11:54:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:50.194 11:54:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:50.194 rmmod nvme_tcp 00:45:50.194 rmmod nvme_fabrics 00:45:50.194 rmmod nvme_keyring 
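[annotation] The teardown that follows is driven through rpc_cmd; spelled out against the plain rpc.py client it amounts to the sequence below. The rpc.py path relative to the SPDK checkout is an assumption; the RPC names and arguments are the ones in the trace.

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Remove the NVMe-oF subsystem fio was attached to, then drop its null backing bdev.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
# Host-side cleanup, mirroring nvmftestfini: unload the kernel NVMe/TCP stack
# (dependent modules such as nvme_keyring in this run come out with it).
sudo modprobe -v -r nvme-tcp
sudo modprobe -v -r nvme-fabrics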
00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2535753 ']' 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2535753 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 2535753 ']' 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 2535753 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2535753 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2535753' 00:45:50.194 killing process with pid 2535753 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@968 -- # kill 2535753 00:45:50.194 11:54:17 nvmf_dif -- common/autotest_common.sh@973 -- # wait 2535753 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:45:50.194 11:54:17 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:52.741 Waiting for block devices as requested 00:45:52.741 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:52.741 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:52.741 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:52.741 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:52.741 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:52.741 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:52.741 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:53.001 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:53.001 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:45:53.001 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:53.262 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:53.262 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:53.262 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:53.522 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:53.522 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:53.522 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:53.522 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:53.522 11:54:22 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:53.522 11:54:22 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:53.522 11:54:22 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:53.522 11:54:22 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:53.522 11:54:22 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:53.522 11:54:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:53.522 11:54:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:56.070 11:54:24 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:56.070 00:45:56.070 real 1m15.523s 00:45:56.070 user 8m0.411s 00:45:56.070 sys 0m19.848s 00:45:56.070 11:54:24 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:56.070 11:54:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:56.070 
************************************ 00:45:56.070 END TEST nvmf_dif 00:45:56.070 ************************************ 00:45:56.070 11:54:24 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:56.070 11:54:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:45:56.070 11:54:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:56.070 11:54:24 -- common/autotest_common.sh@10 -- # set +x 00:45:56.070 ************************************ 00:45:56.070 START TEST nvmf_abort_qd_sizes 00:45:56.070 ************************************ 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:56.070 * Looking for test storage... 00:45:56.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.070 11:54:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:56.071 11:54:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:45:56.071 11:54:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:46:02.660 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:46:02.660 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:02.660 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:46:02.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:46:02.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
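[annotation] The device walk above reduces to a small sysfs loop; this is a condensed sketch of what gather_supported_nvmf_pci_devs does for the two E810 ports seen in this run (PCI addresses taken from the trace, netdev names resolved from sysfs).

# For each matching E810 port, look up the netdev name(s) exposed under the PCI device.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue
        echo "Found net devices under $pci: $(basename "$path")"
    done
done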
00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:02.661 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:02.923 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:03.184 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:46:03.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:03.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:46:03.184 00:46:03.184 --- 10.0.0.2 ping statistics --- 00:46:03.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:03.184 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:46:03.184 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:03.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:03.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:46:03.184 00:46:03.184 --- 10.0.0.1 ping statistics --- 00:46:03.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:03.184 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:46:03.184 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:03.184 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:46:03.184 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:46:03.184 11:54:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:06.502 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:06.502 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:06.763 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2555864 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2555864 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 2555864 ']' 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:46:06.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:06.763 11:54:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:06.763 [2024-06-10 11:54:35.690560] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:46:06.763 [2024-06-10 11:54:35.690626] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:06.763 EAL: No free 2048 kB hugepages reported on node 1 00:46:07.023 [2024-06-10 11:54:35.760950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:07.023 [2024-06-10 11:54:35.837026] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:07.023 [2024-06-10 11:54:35.837065] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:07.023 [2024-06-10 11:54:35.837072] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:07.023 [2024-06-10 11:54:35.837079] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:07.024 [2024-06-10 11:54:35.837085] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:07.024 [2024-06-10 11:54:35.837195] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:07.024 [2024-06-10 11:54:35.837313] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:07.024 [2024-06-10 11:54:35.837464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:07.024 [2024-06-10 11:54:35.837464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:46:07.593 11:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:07.593 11:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:46:07.593 11:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:07.593 11:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:07.593 11:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:07.593 11:54:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:46:07.594 11:54:36 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:07.854 11:54:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:07.854 ************************************ 00:46:07.854 START TEST spdk_target_abort 00:46:07.854 ************************************ 00:46:07.854 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:46:07.854 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:07.854 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:46:07.854 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:07.854 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:08.116 spdk_targetn1 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:08.116 [2024-06-10 11:54:36.911731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:08.116 [2024-06-10 11:54:36.952019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:08.116 11:54:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:08.116 EAL: No free 2048 kB hugepages reported on node 1 
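[annotation] Condensed, the target-side setup that spdk_target_abort just performed through rpc_cmd is the sequence below; the rpc.py invocation is an assumption (the test uses the rpc_cmd wrapper), while the PCI address, transport options, bdev name and NQN are the ones from the trace.

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Claim the local NVMe SSD as an SPDK bdev named spdk_target (its namespace shows up as spdk_targetn1).
./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
# Create the TCP transport, an allow-any-host subsystem backed by that namespace, and a listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420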
00:46:08.116 [2024-06-10 11:54:37.081167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:296 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:46:08.117 [2024-06-10 11:54:37.081193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0027 p:1 m:0 dnr:0 00:46:08.377 [2024-06-10 11:54:37.092107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:720 len:8 PRP1 0x2000078be000 PRP2 0x0 00:46:08.377 [2024-06-10 11:54:37.092125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:46:08.377 [2024-06-10 11:54:37.100447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1032 len:8 PRP1 0x2000078be000 PRP2 0x0 00:46:08.377 [2024-06-10 11:54:37.100463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:46:08.377 [2024-06-10 11:54:37.159181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2944 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:46:08.377 [2024-06-10 11:54:37.159198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:46:11.679 Initializing NVMe Controllers 00:46:11.680 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:11.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:11.680 Initialization complete. Launching workers. 00:46:11.680 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11963, failed: 4 00:46:11.680 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3550, failed to submit 8417 00:46:11.680 success 753, unsuccess 2797, failed 0 00:46:11.680 11:54:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:11.680 11:54:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:11.680 EAL: No free 2048 kB hugepages reported on node 1 00:46:11.680 [2024-06-10 11:54:40.260926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:46:11.680 [2024-06-10 11:54:40.260970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:46:11.680 [2024-06-10 11:54:40.268828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:608 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:46:11.680 [2024-06-10 11:54:40.268852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0054 p:1 m:0 dnr:0 00:46:11.680 [2024-06-10 11:54:40.284837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1008 len:8 PRP1 0x200007c56000 PRP2 0x0 00:46:11.680 [2024-06-10 11:54:40.284860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:46:11.680 [2024-06-10 11:54:40.330154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 
cid:168 nsid:1 lba:2152 len:8 PRP1 0x200007c56000 PRP2 0x0 00:46:11.680 [2024-06-10 11:54:40.330180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:46:15.020 [2024-06-10 11:54:43.376078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1040 is same with the state(5) to be set 00:46:15.020 Initializing NVMe Controllers 00:46:15.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:15.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:15.020 Initialization complete. Launching workers. 00:46:15.020 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8704, failed: 4 00:46:15.020 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7479 00:46:15.020 success 349, unsuccess 880, failed 0 00:46:15.020 11:54:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:15.020 11:54:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:15.020 EAL: No free 2048 kB hugepages reported on node 1 00:46:16.964 [2024-06-10 11:54:45.510193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:219944 len:8 PRP1 0x2000078ee000 PRP2 0x0 00:46:16.964 [2024-06-10 11:54:45.510228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0027 p:1 m:0 dnr:0 00:46:17.535 [2024-06-10 11:54:46.327568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:164 nsid:1 lba:312488 len:8 PRP1 0x200007910000 PRP2 0x0 00:46:17.535 [2024-06-10 11:54:46.327591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:164 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:46:17.535 [2024-06-10 11:54:46.418488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:173 nsid:1 lba:322472 len:8 PRP1 0x2000078f4000 PRP2 0x0 00:46:17.535 [2024-06-10 11:54:46.418508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:173 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:46:17.795 Initializing NVMe Controllers 00:46:17.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:17.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:17.795 Initialization complete. Launching workers. 
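[annotation] The three abort passes are the same invocation with the queue depth swept over 4, 24 and 64. Reconstructed from the trace, the driving loop is essentially the following (the rabort helper in abort_qd_sizes.sh also assembles the -r connection string from its arguments):

qds=(4 24 64)
for qd in "${qds[@]}"; do
    # 50/50 read/write mix, 4 KiB I/Os; the abort example issues I/O and then aborts it,
    # reporting how many abort commands succeed at each queue depth.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done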
00:46:17.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42184, failed: 3 00:46:17.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2562, failed to submit 39625 00:46:17.795 success 616, unsuccess 1946, failed 0 00:46:17.795 11:54:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:17.795 11:54:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:17.795 11:54:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:17.795 11:54:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:17.796 11:54:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:17.796 11:54:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:17.796 11:54:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2555864 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 2555864 ']' 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 2555864 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2555864 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2555864' 00:46:19.709 killing process with pid 2555864 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 2555864 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 2555864 00:46:19.709 00:46:19.709 real 0m12.019s 00:46:19.709 user 0m49.039s 00:46:19.709 sys 0m1.844s 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:19.709 11:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:19.709 ************************************ 00:46:19.709 END TEST spdk_target_abort 00:46:19.709 ************************************ 00:46:19.709 11:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:19.709 11:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:19.709 11:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:19.709 11:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:20.007 ************************************ 00:46:20.007 START TEST kernel_target_abort 00:46:20.007 
************************************ 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:20.007 11:54:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:23.311 Waiting for block devices as requested 00:46:23.312 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:23.312 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:23.312 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:23.312 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:23.312 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:23.312 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:23.572 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:23.572 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:23.572 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:23.833 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:23.833 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:23.833 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:24.094 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:24.094 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:24.094 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:24.094 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:24.355 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:24.355 No valid GPT data, bailing 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:24.355 11:54:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:24.355 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:46:24.355 00:46:24.355 Discovery Log Number of Records 2, Generation counter 2 00:46:24.355 =====Discovery Log Entry 0====== 00:46:24.355 trtype: tcp 00:46:24.355 adrfam: ipv4 00:46:24.355 subtype: current discovery subsystem 00:46:24.355 treq: not specified, sq flow control disable supported 00:46:24.355 portid: 1 00:46:24.355 trsvcid: 4420 00:46:24.355 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:24.355 traddr: 10.0.0.1 00:46:24.355 eflags: none 00:46:24.355 sectype: none 00:46:24.355 =====Discovery Log Entry 1====== 00:46:24.355 trtype: tcp 00:46:24.355 adrfam: ipv4 00:46:24.355 subtype: nvme subsystem 00:46:24.356 treq: not specified, sq flow control disable supported 00:46:24.356 portid: 1 00:46:24.356 trsvcid: 4420 00:46:24.356 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:24.356 traddr: 10.0.0.1 00:46:24.356 eflags: none 00:46:24.356 sectype: none 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.356 11:54:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:24.356 11:54:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:24.356 EAL: No free 2048 kB hugepages reported on node 1 00:46:27.658 Initializing NVMe Controllers 00:46:27.658 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:27.658 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:27.658 Initialization complete. Launching workers. 00:46:27.658 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57301, failed: 0 00:46:27.658 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57301, failed to submit 0 00:46:27.658 success 0, unsuccess 57301, failed 0 00:46:27.658 11:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:27.658 11:54:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:27.658 EAL: No free 2048 kB hugepages reported on node 1 00:46:30.961 Initializing NVMe Controllers 00:46:30.961 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:30.961 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:30.961 Initialization complete. Launching workers. 
00:46:30.961 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98736, failed: 0 00:46:30.961 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24898, failed to submit 73838 00:46:30.961 success 0, unsuccess 24898, failed 0 00:46:30.961 11:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:30.961 11:54:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:30.961 EAL: No free 2048 kB hugepages reported on node 1 00:46:33.506 Initializing NVMe Controllers 00:46:33.506 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:33.506 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:33.506 Initialization complete. Launching workers. 00:46:33.506 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94834, failed: 0 00:46:33.506 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23710, failed to submit 71124 00:46:33.506 success 0, unsuccess 23710, failed 0 00:46:33.506 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:33.506 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:33.506 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:46:33.767 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:33.767 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:33.767 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:33.767 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:33.767 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:46:33.767 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:46:33.767 11:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:37.073 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:46:37.073 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:37.073 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:38.988 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:38.988 00:46:38.988 real 0m19.189s 00:46:38.988 user 0m8.610s 00:46:38.988 sys 0m5.774s 00:46:38.988 11:55:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:38.988 11:55:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:38.988 ************************************ 00:46:38.988 END TEST kernel_target_abort 00:46:38.988 ************************************ 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:38.988 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:38.988 rmmod nvme_tcp 00:46:39.249 rmmod nvme_fabrics 00:46:39.249 rmmod nvme_keyring 00:46:39.249 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:39.249 11:55:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2555864 ']' 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2555864 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 2555864 ']' 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 2555864 00:46:39.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2555864) - No such process 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 2555864 is not found' 00:46:39.249 Process with pid 2555864 is not found 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:46:39.249 11:55:08 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:42.548 Waiting for block devices as requested 00:46:42.548 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:42.548 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:42.548 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:42.548 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:42.548 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:42.808 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:42.808 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:42.808 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:43.068 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:43.068 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:43.068 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:43.328 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:43.328 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:43.328 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:43.587 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:43.587 0000:00:01.0 
(8086 0b00): vfio-pci -> ioatdma 00:46:43.587 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:43.587 11:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:43.587 11:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:43.587 11:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:43.587 11:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:43.587 11:55:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:43.587 11:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:43.587 11:55:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:46.132 11:55:14 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:46:46.132 00:46:46.132 real 0m49.915s 00:46:46.132 user 1m2.663s 00:46:46.132 sys 0m17.977s 00:46:46.132 11:55:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:46.132 11:55:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:46.132 ************************************ 00:46:46.132 END TEST nvmf_abort_qd_sizes 00:46:46.132 ************************************ 00:46:46.132 11:55:14 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:46.132 11:55:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:46.132 11:55:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:46.132 11:55:14 -- common/autotest_common.sh@10 -- # set +x 00:46:46.132 ************************************ 00:46:46.132 START TEST keyring_file 00:46:46.132 ************************************ 00:46:46.132 11:55:14 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:46.132 * Looking for test storage... 
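A condensed recap, not captured output: the kernel_target_abort test that finishes above drives SPDK's abort example against the kernel target once per queue depth. All paths and arguments below are taken verbatim from the three invocations in the trace; only the loop form is a simplification of the rabort helper.

    # The queue-depth sweep performed by rabort in abort_qd_sizes.sh (simplified)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

Each run's summary above reports how many abort commands were submitted versus failed to submit at that depth (57301/0 at qd 4, 24898/73838 at qd 24, 23710/71124 at qd 64).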
00:46:46.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:46.132 11:55:14 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:46.132 11:55:14 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:46.132 11:55:14 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:46.132 11:55:14 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.132 11:55:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.132 11:55:14 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.132 11:55:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:46.132 11:55:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@47 -- # : 0 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iz1bjr76UR 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:46:46.132 11:55:14 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iz1bjr76UR 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iz1bjr76UR 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iz1bjr76UR 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jFIanM83kS 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:46:46.132 11:55:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jFIanM83kS 00:46:46.132 11:55:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jFIanM83kS 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jFIanM83kS 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=2565951 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2565951 00:46:46.132 11:55:14 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:46.132 11:55:14 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2565951 ']' 00:46:46.132 11:55:14 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:46.132 11:55:14 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:46.132 11:55:14 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:46.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:46.132 11:55:14 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:46.132 11:55:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:46.132 [2024-06-10 11:55:14.942075] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
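A minimal sketch, not captured output: the keyring_file test above has just built two key files via prep_key: a mktemp path, an NVMeTLSkey-1-formatted PSK written into it by an inline `python -` helper whose body xtrace does not show, and a chmod to 0600. The bdevperf run that follows registers those files as named keys over its RPC socket and attaches a controller with one of them; the sketch below condenses that flow using only paths and RPC names that appear verbatim in this log (the PSK contents are left as a placeholder variable because the formatting helper is not visible in the trace).

    # Register a PSK key file with bdevperf and attach an NVMe-oF TCP controller using it
    key_path=$(mktemp)                             # e.g. /tmp/tmp.iz1bjr76UR in the trace
    printf '%s\n' "$formatted_psk" > "$key_path"   # placeholder; produced by format_interchange_psk, contents not shown in the log
    chmod 0600 "$key_path"                         # looser modes (e.g. 0660) are rejected, as the negative test below shows

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

The later parts of the run exercise the failure paths: attaching with a key the target was not configured for returns an Input/output error, a key file left at 0660 permissions is refused by keyring_file_add_key, and attaching after the key file has been removed fails with No such device.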
00:46:46.132 [2024-06-10 11:55:14.942154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565951 ] 00:46:46.132 EAL: No free 2048 kB hugepages reported on node 1 00:46:46.132 [2024-06-10 11:55:15.007504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:46.132 [2024-06-10 11:55:15.084391] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:46:47.075 11:55:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:47.075 [2024-06-10 11:55:15.807047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:47.075 null0 00:46:47.075 [2024-06-10 11:55:15.839098] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:47.075 [2024-06-10 11:55:15.839383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:47.075 [2024-06-10 11:55:15.847113] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:47.075 11:55:15 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:47.075 [2024-06-10 11:55:15.863164] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:47.075 request: 00:46:47.075 { 00:46:47.075 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:47.075 "secure_channel": false, 00:46:47.075 "listen_address": { 00:46:47.075 "trtype": "tcp", 00:46:47.075 "traddr": "127.0.0.1", 00:46:47.075 "trsvcid": "4420" 00:46:47.075 }, 00:46:47.075 "method": "nvmf_subsystem_add_listener", 00:46:47.075 "req_id": 1 00:46:47.075 } 00:46:47.075 Got JSON-RPC error response 00:46:47.075 response: 00:46:47.075 { 00:46:47.075 "code": -32602, 00:46:47.075 "message": "Invalid parameters" 00:46:47.075 } 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:47.075 11:55:15 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:47.075 11:55:15 keyring_file -- keyring/file.sh@46 -- # bperfpid=2566285 00:46:47.075 11:55:15 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2566285 /var/tmp/bperf.sock 00:46:47.075 11:55:15 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2566285 ']' 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:47.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:47.075 11:55:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:47.075 [2024-06-10 11:55:15.917604] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 00:46:47.075 [2024-06-10 11:55:15.917654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566285 ] 00:46:47.075 EAL: No free 2048 kB hugepages reported on node 1 00:46:47.075 [2024-06-10 11:55:15.975227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:47.075 [2024-06-10 11:55:16.039326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:47.335 11:55:16 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:47.335 11:55:16 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:46:47.335 11:55:16 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:47.336 11:55:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:47.596 11:55:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jFIanM83kS 00:46:47.596 11:55:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jFIanM83kS 00:46:47.596 11:55:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:46:47.596 11:55:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:46:47.596 11:55:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:47.596 11:55:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:47.596 11:55:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:47.937 11:55:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.iz1bjr76UR == \/\t\m\p\/\t\m\p\.\i\z\1\b\j\r\7\6\U\R ]] 00:46:47.938 11:55:16 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:46:47.938 11:55:16 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:47.938 11:55:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:47.938 11:55:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:47.938 11:55:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:48.223 11:55:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.jFIanM83kS == \/\t\m\p\/\t\m\p\.\j\F\I\a\n\M\8\3\k\S ]] 00:46:48.223 11:55:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:46:48.223 11:55:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:48.223 11:55:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:48.223 11:55:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.223 11:55:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:48.223 11:55:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.223 11:55:17 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:46:48.223 11:55:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:46:48.223 11:55:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:48.223 11:55:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:48.484 11:55:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.484 11:55:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.484 11:55:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:48.484 11:55:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:48.484 11:55:17 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.484 11:55:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.745 [2024-06-10 11:55:17.587682] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:48.745 nvme0n1 00:46:48.745 11:55:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:46:48.745 11:55:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:48.745 11:55:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:48.745 11:55:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.745 11:55:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.745 11:55:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:49.006 11:55:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:46:49.006 11:55:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:46:49.006 11:55:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:49.006 11:55:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:49.006 11:55:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.006 
11:55:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.006 11:55:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:49.267 11:55:18 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:46:49.267 11:55:18 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:49.267 Running I/O for 1 seconds... 00:46:50.655 00:46:50.655 Latency(us) 00:46:50.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:50.655 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:50.655 nvme0n1 : 1.01 10853.02 42.39 0.00 0.00 11756.48 5925.55 23702.19 00:46:50.655 =================================================================================================================== 00:46:50.655 Total : 10853.02 42.39 0.00 0.00 11756.48 5925.55 23702.19 00:46:50.655 0 00:46:50.655 11:55:19 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:50.655 11:55:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:50.655 11:55:19 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:46:50.655 11:55:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:50.655 11:55:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:50.655 11:55:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:50.655 11:55:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:50.655 11:55:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.916 11:55:19 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:46:50.916 11:55:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:46:50.916 11:55:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:50.916 11:55:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:50.916 11:55:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:50.916 11:55:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:50.916 11:55:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.916 11:55:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:50.916 11:55:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:50.916 11:55:19 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:50.916 11:55:19 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:50.916 11:55:19 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:46:50.916 11:55:19 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:50.916 11:55:19 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:46:50.916 11:55:19 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:50.916 11:55:19 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:50.916 11:55:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:51.177 [2024-06-10 11:55:20.046577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:51.177 [2024-06-10 11:55:20.047252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e3b0 (107): Transport endpoint is not connected 00:46:51.177 [2024-06-10 11:55:20.048247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e3b0 (9): Bad file descriptor 00:46:51.177 [2024-06-10 11:55:20.049248] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:51.177 [2024-06-10 11:55:20.049257] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:51.177 [2024-06-10 11:55:20.049264] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:46:51.177 request: 00:46:51.177 { 00:46:51.177 "name": "nvme0", 00:46:51.177 "trtype": "tcp", 00:46:51.177 "traddr": "127.0.0.1", 00:46:51.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:51.177 "adrfam": "ipv4", 00:46:51.177 "trsvcid": "4420", 00:46:51.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:51.177 "psk": "key1", 00:46:51.177 "method": "bdev_nvme_attach_controller", 00:46:51.177 "req_id": 1 00:46:51.177 } 00:46:51.177 Got JSON-RPC error response 00:46:51.177 response: 00:46:51.177 { 00:46:51.177 "code": -5, 00:46:51.177 "message": "Input/output error" 00:46:51.177 } 00:46:51.178 11:55:20 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:51.178 11:55:20 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:51.178 11:55:20 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:51.178 11:55:20 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:51.178 11:55:20 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:46:51.178 11:55:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:51.178 11:55:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:51.178 11:55:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:51.178 11:55:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:51.178 11:55:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:51.438 11:55:20 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:46:51.438 11:55:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:46:51.438 11:55:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:51.438 11:55:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:51.438 11:55:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:51.438 11:55:20 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:46:51.438 11:55:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:51.699 11:55:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:51.699 11:55:20 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:46:51.699 11:55:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:51.959 11:55:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:46:51.959 11:55:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:51.959 11:55:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:46:51.959 11:55:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:46:51.959 11:55:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.220 11:55:21 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:46:52.220 11:55:21 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.iz1bjr76UR 00:46:52.220 11:55:21 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:52.220 11:55:21 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:52.220 11:55:21 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:52.220 11:55:21 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:46:52.220 11:55:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:52.220 11:55:21 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:46:52.220 11:55:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:52.220 11:55:21 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:52.220 11:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:52.481 [2024-06-10 11:55:21.290628] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iz1bjr76UR': 0100660 00:46:52.481 [2024-06-10 11:55:21.290650] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:52.481 request: 00:46:52.481 { 00:46:52.481 "name": "key0", 00:46:52.481 "path": "/tmp/tmp.iz1bjr76UR", 00:46:52.481 "method": "keyring_file_add_key", 00:46:52.481 "req_id": 1 00:46:52.481 } 00:46:52.481 Got JSON-RPC error response 00:46:52.481 response: 00:46:52.481 { 00:46:52.481 "code": -1, 00:46:52.481 "message": "Operation not permitted" 00:46:52.481 } 00:46:52.481 11:55:21 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:52.481 11:55:21 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:52.481 11:55:21 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:52.481 11:55:21 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:52.481 11:55:21 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.iz1bjr76UR 00:46:52.481 11:55:21 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:52.481 11:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iz1bjr76UR 00:46:52.742 11:55:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.iz1bjr76UR 00:46:52.742 11:55:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:46:52.742 11:55:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:52.742 11:55:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.742 11:55:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.742 11:55:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:52.742 11:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.003 11:55:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:46:53.003 11:55:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.003 11:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.003 [2024-06-10 11:55:21.936275] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iz1bjr76UR': No such file or directory 00:46:53.003 [2024-06-10 11:55:21.936293] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:53.003 [2024-06-10 11:55:21.936317] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:53.003 [2024-06-10 11:55:21.936323] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:53.003 [2024-06-10 11:55:21.936334] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:53.003 request: 00:46:53.003 { 00:46:53.003 "name": "nvme0", 00:46:53.003 "trtype": "tcp", 00:46:53.003 "traddr": "127.0.0.1", 00:46:53.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:53.003 "adrfam": "ipv4", 00:46:53.003 "trsvcid": "4420", 00:46:53.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:53.003 "psk": "key0", 00:46:53.003 "method": "bdev_nvme_attach_controller", 
00:46:53.003 "req_id": 1 00:46:53.003 } 00:46:53.003 Got JSON-RPC error response 00:46:53.003 response: 00:46:53.003 { 00:46:53.003 "code": -19, 00:46:53.003 "message": "No such device" 00:46:53.003 } 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:53.003 11:55:21 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:53.003 11:55:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:46:53.003 11:55:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:53.265 11:55:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z4Cljt6taz 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:53.265 11:55:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:53.265 11:55:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:46:53.265 11:55:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:53.265 11:55:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:46:53.265 11:55:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:46:53.265 11:55:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z4Cljt6taz 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z4Cljt6taz 00:46:53.265 11:55:22 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.z4Cljt6taz 00:46:53.265 11:55:22 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4Cljt6taz 00:46:53.265 11:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z4Cljt6taz 00:46:53.525 11:55:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.525 11:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.787 nvme0n1 00:46:53.787 11:55:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:46:53.787 11:55:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:53.787 11:55:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:53.787 11:55:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:53.787 11:55:22 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.787 11:55:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.048 11:55:22 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:46:54.048 11:55:22 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:46:54.049 11:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:54.310 11:55:23 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:46:54.310 11:55:23 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.310 11:55:23 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:46:54.310 11:55:23 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.310 11:55:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.571 11:55:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:46:54.571 11:55:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:54.571 11:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:54.831 11:55:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:46:54.831 11:55:23 keyring_file -- keyring/file.sh@104 -- # jq length 00:46:54.831 11:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.093 11:55:23 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:46:55.093 11:55:23 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4Cljt6taz 00:46:55.093 11:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z4Cljt6taz 00:46:55.353 11:55:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jFIanM83kS 00:46:55.353 11:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jFIanM83kS 00:46:55.353 11:55:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:55.354 11:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:55.614 nvme0n1 00:46:55.614 11:55:24 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:46:55.614 11:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:55.875 11:55:24 keyring_file -- keyring/file.sh@112 -- # config='{ 00:46:55.875 "subsystems": [ 00:46:55.875 { 00:46:55.875 "subsystem": "keyring", 00:46:55.875 "config": [ 00:46:55.875 { 00:46:55.875 "method": "keyring_file_add_key", 00:46:55.875 "params": { 00:46:55.875 "name": "key0", 00:46:55.875 "path": "/tmp/tmp.z4Cljt6taz" 00:46:55.875 } 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "method": "keyring_file_add_key", 00:46:55.875 "params": { 00:46:55.875 "name": "key1", 00:46:55.875 "path": "/tmp/tmp.jFIanM83kS" 00:46:55.875 } 00:46:55.875 } 00:46:55.875 ] 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "subsystem": "iobuf", 00:46:55.875 "config": [ 00:46:55.875 { 00:46:55.875 "method": "iobuf_set_options", 00:46:55.875 "params": { 00:46:55.875 "small_pool_count": 8192, 00:46:55.875 "large_pool_count": 1024, 00:46:55.875 "small_bufsize": 8192, 00:46:55.875 "large_bufsize": 135168 00:46:55.875 } 00:46:55.875 } 00:46:55.875 ] 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "subsystem": "sock", 00:46:55.875 "config": [ 00:46:55.875 { 00:46:55.875 "method": "sock_set_default_impl", 00:46:55.875 "params": { 00:46:55.875 "impl_name": "posix" 00:46:55.875 } 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "method": "sock_impl_set_options", 00:46:55.875 "params": { 00:46:55.875 "impl_name": "ssl", 00:46:55.875 "recv_buf_size": 4096, 00:46:55.875 "send_buf_size": 4096, 00:46:55.875 "enable_recv_pipe": true, 00:46:55.875 "enable_quickack": false, 00:46:55.875 "enable_placement_id": 0, 00:46:55.875 "enable_zerocopy_send_server": true, 00:46:55.875 "enable_zerocopy_send_client": false, 00:46:55.875 "zerocopy_threshold": 0, 00:46:55.875 "tls_version": 0, 00:46:55.875 "enable_ktls": false 00:46:55.875 } 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "method": "sock_impl_set_options", 00:46:55.875 "params": { 00:46:55.875 "impl_name": "posix", 00:46:55.875 "recv_buf_size": 2097152, 00:46:55.875 "send_buf_size": 2097152, 00:46:55.875 "enable_recv_pipe": true, 00:46:55.875 "enable_quickack": false, 00:46:55.875 "enable_placement_id": 0, 00:46:55.875 "enable_zerocopy_send_server": true, 00:46:55.875 "enable_zerocopy_send_client": false, 00:46:55.875 "zerocopy_threshold": 0, 00:46:55.875 "tls_version": 0, 00:46:55.875 "enable_ktls": false 00:46:55.875 } 00:46:55.875 } 00:46:55.875 ] 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "subsystem": "vmd", 00:46:55.875 "config": [] 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "subsystem": "accel", 00:46:55.875 "config": [ 00:46:55.875 { 00:46:55.875 "method": "accel_set_options", 00:46:55.875 "params": { 00:46:55.875 "small_cache_size": 128, 00:46:55.875 "large_cache_size": 16, 00:46:55.875 "task_count": 2048, 00:46:55.875 "sequence_count": 2048, 00:46:55.875 "buf_count": 2048 00:46:55.875 } 00:46:55.875 } 00:46:55.875 ] 00:46:55.875 }, 00:46:55.875 { 00:46:55.875 "subsystem": "bdev", 00:46:55.875 "config": [ 00:46:55.875 { 00:46:55.875 "method": "bdev_set_options", 00:46:55.875 "params": { 00:46:55.875 "bdev_io_pool_size": 65535, 00:46:55.875 "bdev_io_cache_size": 256, 00:46:55.876 "bdev_auto_examine": true, 00:46:55.876 "iobuf_small_cache_size": 128, 
00:46:55.876 "iobuf_large_cache_size": 16 00:46:55.876 } 00:46:55.876 }, 00:46:55.876 { 00:46:55.876 "method": "bdev_raid_set_options", 00:46:55.876 "params": { 00:46:55.876 "process_window_size_kb": 1024 00:46:55.876 } 00:46:55.876 }, 00:46:55.876 { 00:46:55.876 "method": "bdev_iscsi_set_options", 00:46:55.876 "params": { 00:46:55.876 "timeout_sec": 30 00:46:55.876 } 00:46:55.876 }, 00:46:55.876 { 00:46:55.876 "method": "bdev_nvme_set_options", 00:46:55.876 "params": { 00:46:55.876 "action_on_timeout": "none", 00:46:55.876 "timeout_us": 0, 00:46:55.876 "timeout_admin_us": 0, 00:46:55.876 "keep_alive_timeout_ms": 10000, 00:46:55.876 "arbitration_burst": 0, 00:46:55.876 "low_priority_weight": 0, 00:46:55.876 "medium_priority_weight": 0, 00:46:55.876 "high_priority_weight": 0, 00:46:55.876 "nvme_adminq_poll_period_us": 10000, 00:46:55.876 "nvme_ioq_poll_period_us": 0, 00:46:55.876 "io_queue_requests": 512, 00:46:55.876 "delay_cmd_submit": true, 00:46:55.876 "transport_retry_count": 4, 00:46:55.876 "bdev_retry_count": 3, 00:46:55.876 "transport_ack_timeout": 0, 00:46:55.876 "ctrlr_loss_timeout_sec": 0, 00:46:55.876 "reconnect_delay_sec": 0, 00:46:55.876 "fast_io_fail_timeout_sec": 0, 00:46:55.876 "disable_auto_failback": false, 00:46:55.876 "generate_uuids": false, 00:46:55.876 "transport_tos": 0, 00:46:55.876 "nvme_error_stat": false, 00:46:55.876 "rdma_srq_size": 0, 00:46:55.876 "io_path_stat": false, 00:46:55.876 "allow_accel_sequence": false, 00:46:55.876 "rdma_max_cq_size": 0, 00:46:55.876 "rdma_cm_event_timeout_ms": 0, 00:46:55.876 "dhchap_digests": [ 00:46:55.876 "sha256", 00:46:55.876 "sha384", 00:46:55.876 "sha512" 00:46:55.876 ], 00:46:55.876 "dhchap_dhgroups": [ 00:46:55.876 "null", 00:46:55.876 "ffdhe2048", 00:46:55.876 "ffdhe3072", 00:46:55.876 "ffdhe4096", 00:46:55.876 "ffdhe6144", 00:46:55.876 "ffdhe8192" 00:46:55.876 ] 00:46:55.876 } 00:46:55.876 }, 00:46:55.876 { 00:46:55.876 "method": "bdev_nvme_attach_controller", 00:46:55.876 "params": { 00:46:55.876 "name": "nvme0", 00:46:55.876 "trtype": "TCP", 00:46:55.876 "adrfam": "IPv4", 00:46:55.876 "traddr": "127.0.0.1", 00:46:55.876 "trsvcid": "4420", 00:46:55.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:55.876 "prchk_reftag": false, 00:46:55.876 "prchk_guard": false, 00:46:55.876 "ctrlr_loss_timeout_sec": 0, 00:46:55.876 "reconnect_delay_sec": 0, 00:46:55.876 "fast_io_fail_timeout_sec": 0, 00:46:55.876 "psk": "key0", 00:46:55.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:55.876 "hdgst": false, 00:46:55.876 "ddgst": false 00:46:55.876 } 00:46:55.876 }, 00:46:55.876 { 00:46:55.876 "method": "bdev_nvme_set_hotplug", 00:46:55.876 "params": { 00:46:55.876 "period_us": 100000, 00:46:55.876 "enable": false 00:46:55.876 } 00:46:55.876 }, 00:46:55.876 { 00:46:55.876 "method": "bdev_wait_for_examine" 00:46:55.876 } 00:46:55.876 ] 00:46:55.876 }, 00:46:55.876 { 00:46:55.876 "subsystem": "nbd", 00:46:55.876 "config": [] 00:46:55.876 } 00:46:55.876 ] 00:46:55.876 }' 00:46:55.876 11:55:24 keyring_file -- keyring/file.sh@114 -- # killprocess 2566285 00:46:55.876 11:55:24 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2566285 ']' 00:46:55.876 11:55:24 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2566285 00:46:55.876 11:55:24 keyring_file -- common/autotest_common.sh@954 -- # uname 00:46:55.876 11:55:24 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:55.876 11:55:24 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2566285 00:46:56.138 11:55:24 keyring_file 
-- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:56.138 11:55:24 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:56.138 11:55:24 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2566285' 00:46:56.138 killing process with pid 2566285 00:46:56.138 11:55:24 keyring_file -- common/autotest_common.sh@968 -- # kill 2566285 00:46:56.138 Received shutdown signal, test time was about 1.000000 seconds 00:46:56.138 00:46:56.138 Latency(us) 00:46:56.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:56.138 =================================================================================================================== 00:46:56.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:56.138 11:55:24 keyring_file -- common/autotest_common.sh@973 -- # wait 2566285 00:46:56.138 11:55:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=2568101 00:46:56.138 11:55:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2568101 /var/tmp/bperf.sock 00:46:56.138 11:55:25 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2568101 ']' 00:46:56.138 11:55:25 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:56.138 11:55:25 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:56.138 11:55:25 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:56.138 11:55:25 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:56.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
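The sequence above saves the running bdevperf configuration over its RPC socket and immediately replays it into a fresh bdevperf instance through process substitution (-c /dev/fd/63). A minimal sketch of that pattern, assuming the workspace paths used in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

  # Dump the current JSON config of the target listening on the bperf socket.
  config=$("$rpc" -s /var/tmp/bperf.sock save_config)

  # Restart bdevperf with the same workload flags, injecting the saved config
  # through process substitution so it shows up as /dev/fd/63.
  "$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")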
00:46:56.138 11:55:25 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:56.138 11:55:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:56.138 11:55:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:46:56.138 "subsystems": [ 00:46:56.138 { 00:46:56.138 "subsystem": "keyring", 00:46:56.138 "config": [ 00:46:56.138 { 00:46:56.138 "method": "keyring_file_add_key", 00:46:56.138 "params": { 00:46:56.138 "name": "key0", 00:46:56.138 "path": "/tmp/tmp.z4Cljt6taz" 00:46:56.138 } 00:46:56.138 }, 00:46:56.138 { 00:46:56.138 "method": "keyring_file_add_key", 00:46:56.138 "params": { 00:46:56.138 "name": "key1", 00:46:56.138 "path": "/tmp/tmp.jFIanM83kS" 00:46:56.138 } 00:46:56.138 } 00:46:56.138 ] 00:46:56.138 }, 00:46:56.138 { 00:46:56.138 "subsystem": "iobuf", 00:46:56.138 "config": [ 00:46:56.138 { 00:46:56.138 "method": "iobuf_set_options", 00:46:56.138 "params": { 00:46:56.138 "small_pool_count": 8192, 00:46:56.138 "large_pool_count": 1024, 00:46:56.138 "small_bufsize": 8192, 00:46:56.138 "large_bufsize": 135168 00:46:56.138 } 00:46:56.138 } 00:46:56.138 ] 00:46:56.138 }, 00:46:56.138 { 00:46:56.138 "subsystem": "sock", 00:46:56.138 "config": [ 00:46:56.138 { 00:46:56.138 "method": "sock_set_default_impl", 00:46:56.138 "params": { 00:46:56.138 "impl_name": "posix" 00:46:56.138 } 00:46:56.138 }, 00:46:56.138 { 00:46:56.138 "method": "sock_impl_set_options", 00:46:56.138 "params": { 00:46:56.138 "impl_name": "ssl", 00:46:56.138 "recv_buf_size": 4096, 00:46:56.138 "send_buf_size": 4096, 00:46:56.138 "enable_recv_pipe": true, 00:46:56.138 "enable_quickack": false, 00:46:56.138 "enable_placement_id": 0, 00:46:56.138 "enable_zerocopy_send_server": true, 00:46:56.138 "enable_zerocopy_send_client": false, 00:46:56.138 "zerocopy_threshold": 0, 00:46:56.138 "tls_version": 0, 00:46:56.138 "enable_ktls": false 00:46:56.138 } 00:46:56.138 }, 00:46:56.138 { 00:46:56.138 "method": "sock_impl_set_options", 00:46:56.138 "params": { 00:46:56.138 "impl_name": "posix", 00:46:56.138 "recv_buf_size": 2097152, 00:46:56.138 "send_buf_size": 2097152, 00:46:56.138 "enable_recv_pipe": true, 00:46:56.138 "enable_quickack": false, 00:46:56.138 "enable_placement_id": 0, 00:46:56.138 "enable_zerocopy_send_server": true, 00:46:56.138 "enable_zerocopy_send_client": false, 00:46:56.138 "zerocopy_threshold": 0, 00:46:56.138 "tls_version": 0, 00:46:56.138 "enable_ktls": false 00:46:56.138 } 00:46:56.138 } 00:46:56.138 ] 00:46:56.138 }, 00:46:56.138 { 00:46:56.138 "subsystem": "vmd", 00:46:56.138 "config": [] 00:46:56.138 }, 00:46:56.138 { 00:46:56.138 "subsystem": "accel", 00:46:56.138 "config": [ 00:46:56.138 { 00:46:56.138 "method": "accel_set_options", 00:46:56.138 "params": { 00:46:56.138 "small_cache_size": 128, 00:46:56.138 "large_cache_size": 16, 00:46:56.139 "task_count": 2048, 00:46:56.139 "sequence_count": 2048, 00:46:56.139 "buf_count": 2048 00:46:56.139 } 00:46:56.139 } 00:46:56.139 ] 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 "subsystem": "bdev", 00:46:56.139 "config": [ 00:46:56.139 { 00:46:56.139 "method": "bdev_set_options", 00:46:56.139 "params": { 00:46:56.139 "bdev_io_pool_size": 65535, 00:46:56.139 "bdev_io_cache_size": 256, 00:46:56.139 "bdev_auto_examine": true, 00:46:56.139 "iobuf_small_cache_size": 128, 00:46:56.139 "iobuf_large_cache_size": 16 00:46:56.139 } 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 "method": "bdev_raid_set_options", 00:46:56.139 "params": { 00:46:56.139 "process_window_size_kb": 1024 00:46:56.139 } 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 
"method": "bdev_iscsi_set_options", 00:46:56.139 "params": { 00:46:56.139 "timeout_sec": 30 00:46:56.139 } 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 "method": "bdev_nvme_set_options", 00:46:56.139 "params": { 00:46:56.139 "action_on_timeout": "none", 00:46:56.139 "timeout_us": 0, 00:46:56.139 "timeout_admin_us": 0, 00:46:56.139 "keep_alive_timeout_ms": 10000, 00:46:56.139 "arbitration_burst": 0, 00:46:56.139 "low_priority_weight": 0, 00:46:56.139 "medium_priority_weight": 0, 00:46:56.139 "high_priority_weight": 0, 00:46:56.139 "nvme_adminq_poll_period_us": 10000, 00:46:56.139 "nvme_ioq_poll_period_us": 0, 00:46:56.139 "io_queue_requests": 512, 00:46:56.139 "delay_cmd_submit": true, 00:46:56.139 "transport_retry_count": 4, 00:46:56.139 "bdev_retry_count": 3, 00:46:56.139 "transport_ack_timeout": 0, 00:46:56.139 "ctrlr_loss_timeout_sec": 0, 00:46:56.139 "reconnect_delay_sec": 0, 00:46:56.139 "fast_io_fail_timeout_sec": 0, 00:46:56.139 "disable_auto_failback": false, 00:46:56.139 "generate_uuids": false, 00:46:56.139 "transport_tos": 0, 00:46:56.139 "nvme_error_stat": false, 00:46:56.139 "rdma_srq_size": 0, 00:46:56.139 "io_path_stat": false, 00:46:56.139 "allow_accel_sequence": false, 00:46:56.139 "rdma_max_cq_size": 0, 00:46:56.139 "rdma_cm_event_timeout_ms": 0, 00:46:56.139 "dhchap_digests": [ 00:46:56.139 "sha256", 00:46:56.139 "sha384", 00:46:56.139 "sha512" 00:46:56.139 ], 00:46:56.139 "dhchap_dhgroups": [ 00:46:56.139 "null", 00:46:56.139 "ffdhe2048", 00:46:56.139 "ffdhe3072", 00:46:56.139 "ffdhe4096", 00:46:56.139 "ffdhe6144", 00:46:56.139 "ffdhe8192" 00:46:56.139 ] 00:46:56.139 } 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 "method": "bdev_nvme_attach_controller", 00:46:56.139 "params": { 00:46:56.139 "name": "nvme0", 00:46:56.139 "trtype": "TCP", 00:46:56.139 "adrfam": "IPv4", 00:46:56.139 "traddr": "127.0.0.1", 00:46:56.139 "trsvcid": "4420", 00:46:56.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:56.139 "prchk_reftag": false, 00:46:56.139 "prchk_guard": false, 00:46:56.139 "ctrlr_loss_timeout_sec": 0, 00:46:56.139 "reconnect_delay_sec": 0, 00:46:56.139 "fast_io_fail_timeout_sec": 0, 00:46:56.139 "psk": "key0", 00:46:56.139 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:56.139 "hdgst": false, 00:46:56.139 "ddgst": false 00:46:56.139 } 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 "method": "bdev_nvme_set_hotplug", 00:46:56.139 "params": { 00:46:56.139 "period_us": 100000, 00:46:56.139 "enable": false 00:46:56.139 } 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 "method": "bdev_wait_for_examine" 00:46:56.139 } 00:46:56.139 ] 00:46:56.139 }, 00:46:56.139 { 00:46:56.139 "subsystem": "nbd", 00:46:56.139 "config": [] 00:46:56.139 } 00:46:56.139 ] 00:46:56.139 }' 00:46:56.139 [2024-06-10 11:55:25.063902] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
00:46:56.139 [2024-06-10 11:55:25.063962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568101 ] 00:46:56.139 EAL: No free 2048 kB hugepages reported on node 1 00:46:56.400 [2024-06-10 11:55:25.121242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:56.400 [2024-06-10 11:55:25.184778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:56.400 [2024-06-10 11:55:25.331472] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:56.971 11:55:25 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:56.971 11:55:25 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:46:56.971 11:55:25 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:46:56.971 11:55:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.971 11:55:25 keyring_file -- keyring/file.sh@120 -- # jq length 00:46:57.233 11:55:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:46:57.233 11:55:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:46:57.233 11:55:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:57.233 11:55:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:57.233 11:55:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:57.233 11:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:57.233 11:55:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:57.493 11:55:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:57.493 11:55:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:46:57.493 11:55:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:57.493 11:55:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:57.494 11:55:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:57.494 11:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:57.494 11:55:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:57.754 11:55:26 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:46:57.754 11:55:26 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:46:57.754 11:55:26 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:46:57.754 11:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:58.016 11:55:26 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:46:58.016 11:55:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:58.016 11:55:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.z4Cljt6taz /tmp/tmp.jFIanM83kS 00:46:58.016 11:55:26 keyring_file -- keyring/file.sh@20 -- # killprocess 2568101 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2568101 ']' 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2568101 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2568101 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2568101' 00:46:58.016 killing process with pid 2568101 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@968 -- # kill 2568101 00:46:58.016 Received shutdown signal, test time was about 1.000000 seconds 00:46:58.016 00:46:58.016 Latency(us) 00:46:58.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:58.016 =================================================================================================================== 00:46:58.016 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@973 -- # wait 2568101 00:46:58.016 11:55:26 keyring_file -- keyring/file.sh@21 -- # killprocess 2565951 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2565951 ']' 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2565951 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@954 -- # uname 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2565951 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2565951' 00:46:58.016 killing process with pid 2565951 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@968 -- # kill 2565951 00:46:58.016 [2024-06-10 11:55:26.971951] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:46:58.016 11:55:26 keyring_file -- common/autotest_common.sh@973 -- # wait 2565951 00:46:58.277 00:46:58.277 real 0m12.567s 00:46:58.277 user 0m30.640s 00:46:58.277 sys 0m2.839s 00:46:58.277 11:55:27 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:46:58.277 11:55:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:58.277 ************************************ 00:46:58.277 END TEST keyring_file 00:46:58.277 ************************************ 00:46:58.277 11:55:27 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:46:58.277 11:55:27 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:58.277 11:55:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:46:58.277 11:55:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:46:58.277 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:46:58.539 ************************************ 00:46:58.539 START TEST keyring_linux 00:46:58.539 ************************************ 00:46:58.539 11:55:27 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:58.539 * Looking for test storage... 
00:46:58.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:58.539 11:55:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:58.539 11:55:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:58.539 11:55:27 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:58.539 11:55:27 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:58.539 11:55:27 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:58.539 11:55:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.539 11:55:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.539 11:55:27 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.539 11:55:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:58.539 11:55:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:58.539 11:55:27 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:58.539 11:55:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:58.539 11:55:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:58.539 11:55:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:46:58.540 11:55:27 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:58.540 /tmp/:spdk-test:key0 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:46:58.540 11:55:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:58.540 11:55:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:58.540 /tmp/:spdk-test:key1 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2568533 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2568533 00:46:58.540 11:55:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:58.540 11:55:27 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2568533 ']' 00:46:58.540 11:55:27 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:58.540 11:55:27 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:58.540 11:55:27 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:58.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:58.540 11:55:27 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:58.540 11:55:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:58.802 [2024-06-10 11:55:27.559892] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
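Before the target initialization messages that follow, linux.sh had already staged its two PSKs on disk via the prep_key helper traced above. Condensed, and with format_interchange_psk taken as given by nvmf/common.sh, the flow per key is roughly:

  key_hex=00112233445566778899aabbccddeeff     # raw key material for key0
  path=/tmp/:spdk-test:key0

  # format_interchange_psk (nvmf/common.sh) wraps the key in the NVMe TLS PSK
  # interchange format, "NVMeTLSkey-1:00:<base64 payload>:", via an inline
  # python helper; the exact encoding is left to that helper.
  format_interchange_psk "$key_hex" 0 > "$path"

  chmod 0600 "$path"                           # PSK files must not be world-readable
  echo "$path"                                 # prep_key hands the path back to the caller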
00:46:58.802 [2024-06-10 11:55:27.559961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568533 ] 00:46:58.802 EAL: No free 2048 kB hugepages reported on node 1 00:46:58.802 [2024-06-10 11:55:27.626226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:58.802 [2024-06-10 11:55:27.700963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:59.745 11:55:28 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:59.745 11:55:28 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:46:59.745 11:55:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:46:59.745 11:55:28 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:59.745 11:55:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:59.745 [2024-06-10 11:55:28.360434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:59.745 null0 00:46:59.745 [2024-06-10 11:55:28.392483] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:59.745 [2024-06-10 11:55:28.392881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:59.745 11:55:28 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:59.746 11:55:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:46:59.746 107907375 00:46:59.746 11:55:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:46:59.746 701196033 00:46:59.746 11:55:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2568865 00:46:59.746 11:55:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2568865 /var/tmp/bperf.sock 00:46:59.746 11:55:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2568865 ']' 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:59.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:59.746 [2024-06-10 11:55:28.475275] Starting SPDK v24.09-pre git sha1 ee2eae53a / DPDK 24.03.0 initialization... 
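With spdk_tgt listening, the keyring_linux suite moves the same interchange-format strings into the kernel session keyring so they can be referenced by name instead of by path. A sketch of the keyctl steps traced above (the serial numbers are whatever the kernel hands back; 107907375 and 701196033 in this run):

  # Add the key0 payload to the session keyring (@s) under the name the SPDK
  # keyring_linux module will later look up.
  sn0=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)

  # Same for key1.
  sn1=$(keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s)

  # Later checks resolve the name back to a serial and dump the payload:
  keyctl search @s user :spdk-test:key0        # -> $sn0
  keyctl print "$sn0"                          # -> the interchange-format string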
00:46:59.746 [2024-06-10 11:55:28.475324] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568865 ] 00:46:59.746 EAL: No free 2048 kB hugepages reported on node 1 00:46:59.746 [2024-06-10 11:55:28.533006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:59.746 [2024-06-10 11:55:28.596911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:59.746 11:55:28 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:46:59.746 11:55:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:46:59.746 11:55:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:00.007 11:55:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:00.007 11:55:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:00.268 11:55:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:00.268 11:55:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:00.529 [2024-06-10 11:55:29.283196] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:00.529 nvme0n1 00:47:00.529 11:55:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:00.529 11:55:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:00.529 11:55:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:00.529 11:55:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:00.529 11:55:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:00.529 11:55:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:00.791 11:55:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:00.791 11:55:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:00.791 11:55:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:00.791 11:55:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:00.791 11:55:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:00.791 11:55:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:00.791 11:55:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:01.052 11:55:29 keyring_linux -- keyring/linux.sh@25 -- # sn=107907375 00:47:01.052 11:55:29 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:01.052 11:55:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
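The attach step issued through the bperf_cmd wrapper above is an ordinary bdev_nvme_attach_controller call; the only difference from the keyring_file variant is that --psk names the kernel-keyring entry rather than a registered file key. Spelled out with this run's socket and NQNs:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Attach the local NVMe/TCP target; the PSK is resolved from the kernel
  # session keyring by the ":spdk-test:key0" name added with keyctl above.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0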
00:47:01.052 11:55:29 keyring_linux -- keyring/linux.sh@26 -- # [[ 107907375 == \1\0\7\9\0\7\3\7\5 ]] 00:47:01.052 11:55:29 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 107907375 00:47:01.053 11:55:29 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:01.053 11:55:29 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:01.053 Running I/O for 1 seconds... 00:47:01.994 00:47:01.994 Latency(us) 00:47:01.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:01.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:01.994 nvme0n1 : 1.01 11017.69 43.04 0.00 0.00 11560.83 8137.39 20206.93 00:47:01.994 =================================================================================================================== 00:47:01.994 Total : 11017.69 43.04 0.00 0.00 11560.83 8137.39 20206.93 00:47:01.994 0 00:47:01.994 11:55:30 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:01.994 11:55:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:02.255 11:55:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:02.255 11:55:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:02.255 11:55:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:02.255 11:55:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:02.255 11:55:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:02.256 11:55:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:02.516 11:55:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:02.516 11:55:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:02.516 11:55:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:02.516 11:55:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:02.516 11:55:31 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:47:02.516 11:55:31 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:02.516 11:55:31 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:47:02.516 11:55:31 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:47:02.516 11:55:31 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:47:02.516 11:55:31 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:47:02.516 11:55:31 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:02.516 11:55:31 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:02.778 [2024-06-10 11:55:31.564922] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:02.778 [2024-06-10 11:55:31.565370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a3b0 (107): Transport endpoint is not connected 00:47:02.778 [2024-06-10 11:55:31.566364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0a3b0 (9): Bad file descriptor 00:47:02.778 [2024-06-10 11:55:31.567366] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:47:02.778 [2024-06-10 11:55:31.567375] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:02.778 [2024-06-10 11:55:31.567382] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:47:02.778 request: 00:47:02.778 { 00:47:02.778 "name": "nvme0", 00:47:02.778 "trtype": "tcp", 00:47:02.778 "traddr": "127.0.0.1", 00:47:02.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:02.778 "adrfam": "ipv4", 00:47:02.778 "trsvcid": "4420", 00:47:02.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:02.778 "psk": ":spdk-test:key1", 00:47:02.778 "method": "bdev_nvme_attach_controller", 00:47:02.778 "req_id": 1 00:47:02.778 } 00:47:02.778 Got JSON-RPC error response 00:47:02.778 response: 00:47:02.778 { 00:47:02.778 "code": -5, 00:47:02.778 "message": "Input/output error" 00:47:02.778 } 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@33 -- # sn=107907375 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 107907375 00:47:02.778 1 links removed 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@33 -- # sn=701196033 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 701196033 00:47:02.778 1 links removed 00:47:02.778 11:55:31 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 2568865 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2568865 ']' 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2568865 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2568865 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2568865' 00:47:02.778 killing process with pid 2568865 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@968 -- # kill 2568865 00:47:02.778 Received shutdown signal, test time was about 1.000000 seconds 00:47:02.778 00:47:02.778 Latency(us) 00:47:02.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:02.778 =================================================================================================================== 00:47:02.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:02.778 11:55:31 keyring_linux -- common/autotest_common.sh@973 -- # wait 2568865 00:47:03.039 11:55:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2568533 00:47:03.039 11:55:31 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2568533 ']' 00:47:03.039 11:55:31 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2568533 00:47:03.039 11:55:31 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:47:03.039 11:55:31 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:03.039 11:55:31 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2568533 00:47:03.039 11:55:31 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:47:03.039 11:55:31 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:47:03.040 11:55:31 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2568533' 00:47:03.040 killing process with pid 2568533 00:47:03.040 11:55:31 keyring_linux -- common/autotest_common.sh@968 -- # kill 2568533 00:47:03.040 11:55:31 keyring_linux -- common/autotest_common.sh@973 -- # wait 2568533 00:47:03.301 00:47:03.301 real 0m4.787s 00:47:03.301 user 0m8.548s 00:47:03.301 sys 0m1.441s 00:47:03.301 11:55:32 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:03.301 11:55:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:03.301 ************************************ 00:47:03.301 END TEST keyring_linux 00:47:03.301 ************************************ 00:47:03.301 11:55:32 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
00:47:03.301 11:55:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:47:03.301 11:55:32 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:47:03.301 11:55:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:47:03.301 11:55:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:47:03.301 11:55:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:47:03.301 11:55:32 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:47:03.301 11:55:32 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:47:03.301 11:55:32 -- common/autotest_common.sh@723 -- # xtrace_disable 00:47:03.301 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:47:03.301 11:55:32 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:47:03.301 11:55:32 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:47:03.301 11:55:32 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:47:03.302 11:55:32 -- common/autotest_common.sh@10 -- # set +x 00:47:11.444 INFO: APP EXITING 00:47:11.444 INFO: killing all VMs 00:47:11.444 INFO: killing vhost app 00:47:11.444 WARN: no vhost pid file found 00:47:11.444 INFO: EXIT DONE 00:47:14.004 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:47:14.004 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:47:14.004 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:47:14.004 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:47:14.004 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:47:14.004 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:47:14.004 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:65:00.0 (144d a80a): Already using the nvme driver 00:47:14.264 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:47:14.264 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:47:17.645 Cleaning 00:47:17.645 Removing: /var/run/dpdk/spdk0/config 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:17.645 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:17.645 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:17.645 Removing: /var/run/dpdk/spdk1/config 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:17.905 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:17.905 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:17.905 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:17.905 Removing: /var/run/dpdk/spdk1/mp_socket 00:47:17.905 Removing: /var/run/dpdk/spdk2/config 00:47:17.905 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:17.905 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:17.905 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:17.905 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:17.905 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:17.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:17.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:17.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:17.906 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:17.906 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:17.906 Removing: /var/run/dpdk/spdk3/config 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:17.906 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:17.906 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:17.906 Removing: /var/run/dpdk/spdk4/config 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:17.906 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:17.906 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:17.906 Removing: /dev/shm/bdev_svc_trace.1 00:47:17.906 Removing: /dev/shm/nvmf_trace.0 00:47:17.906 Removing: /dev/shm/spdk_tgt_trace.pid2111956 00:47:17.906 Removing: /var/run/dpdk/spdk0 00:47:17.906 Removing: /var/run/dpdk/spdk1 00:47:17.906 Removing: /var/run/dpdk/spdk2 00:47:17.906 Removing: /var/run/dpdk/spdk3 00:47:17.906 Removing: /var/run/dpdk/spdk4 00:47:17.906 Removing: /var/run/dpdk/spdk_pid2110478 00:47:17.906 Removing: /var/run/dpdk/spdk_pid2111956 00:47:17.906 Removing: /var/run/dpdk/spdk_pid2112473 00:47:17.906 Removing: /var/run/dpdk/spdk_pid2113649 00:47:17.906 Removing: /var/run/dpdk/spdk_pid2113842 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2114913 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2115079 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2115360 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2116495 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2117237 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2117591 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2117950 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2118338 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2118517 00:47:18.167 Removing: 
/var/run/dpdk/spdk_pid2118869 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2119184 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2119416 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2120637 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2124252 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2124473 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2124656 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2124760 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2125357 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2125360 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2125754 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2125934 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2126182 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2126365 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2126477 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2126813 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2127252 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2127605 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2127983 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2128204 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2128395 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2128457 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2128810 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2129125 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2129316 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2129551 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2129898 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2130247 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2130603 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2130808 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2131006 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2131341 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2131693 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2132040 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2132329 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2132509 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2132781 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2133134 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2133486 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2133809 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2134043 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2134339 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2134665 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2135053 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2140043 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2192006 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2197754 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2209514 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2215892 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2220573 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2221395 00:47:18.167 Removing: /var/run/dpdk/spdk_pid2235241 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2235245 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2236249 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2237257 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2238327 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2238962 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2239110 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2239337 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2239604 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2239606 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2240618 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2241622 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2242740 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2243407 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2243463 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2243761 00:47:18.429 Removing: 
/var/run/dpdk/spdk_pid2245631 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2246695 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2256704 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2257062 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2262104 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2269201 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2272573 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2284386 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2295168 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2297637 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2298670 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2319259 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2323804 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2357931 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2363090 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2365071 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2367197 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2367421 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2367432 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2367577 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2368146 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2370162 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2371228 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2371725 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2374319 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2375021 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2375724 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2380733 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2393253 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2398077 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2405290 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2406777 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2408297 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2413511 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2418400 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2427285 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2427389 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2432331 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2432515 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2432848 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2433224 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2433357 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2438905 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2439609 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2445010 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2448647 00:47:18.429 Removing: /var/run/dpdk/spdk_pid2455187 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2461403 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2471939 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2480230 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2480252 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2502603 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2503277 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2503961 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2504596 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2505372 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2506053 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2506731 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2507409 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2512445 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2512787 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2519809 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2520192 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2522736 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2530144 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2530160 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2536004 00:47:18.692 Removing: 
/var/run/dpdk/spdk_pid2538201 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2540561 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2541903 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2544405 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2545778 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2556128 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2556788 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2557454 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2560086 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2560739 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2561412 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2565951 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2566285 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2568101 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2568533 00:47:18.692 Removing: /var/run/dpdk/spdk_pid2568865 00:47:18.692 Clean 00:47:18.692 11:55:47 -- common/autotest_common.sh@1450 -- # return 0 00:47:18.692 11:55:47 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:47:18.692 11:55:47 -- common/autotest_common.sh@729 -- # xtrace_disable 00:47:18.692 11:55:47 -- common/autotest_common.sh@10 -- # set +x 00:47:18.953 11:55:47 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:47:18.953 11:55:47 -- common/autotest_common.sh@729 -- # xtrace_disable 00:47:18.953 11:55:47 -- common/autotest_common.sh@10 -- # set +x 00:47:18.953 11:55:47 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:18.953 11:55:47 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:18.953 11:55:47 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:18.953 11:55:47 -- spdk/autotest.sh@391 -- # hash lcov 00:47:18.953 11:55:47 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:47:18.953 11:55:47 -- spdk/autotest.sh@393 -- # hostname 00:47:18.953 11:55:47 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-10 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:18.953 geninfo: WARNING: invalid characters removed from testname! 
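The capture step above, together with the merge and filter calls that follow it in the console output, builds the run's final coverage report. Below is a minimal stand-alone sketch of that lcov flow; the workspace layout and the cov_base.info baseline are taken from the log, while the closing genhtml step is an assumption and does not appear in this excerpt.

```bash
#!/usr/bin/env bash
# Minimal sketch of the coverage post-processing seen in the log.
# Assumptions: $SPDK_DIR points at the built SPDK tree and $OUT already
# contains cov_base.info from a pre-test baseline capture.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK_DIR/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# Capture the coverage accumulated while the tests ran.
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

# Merge the pre-test baseline with the test capture.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Drop code that is not SPDK's own (DPDK, system headers, sample apps),
# one pattern at a time, mirroring the separate lcov -r calls in the log.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# Assumption: render an HTML report from the filtered data (not shown in the log).
genhtml "$OUT/cov_total.info" -o "$OUT/coverage"
```

Filtering in place (reading and rewriting cov_total.info) matches what the logged lcov -r invocations do.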
00:47:45.540 11:56:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:46.480 11:56:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:49.020 11:56:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:50.928 11:56:19 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:53.469 11:56:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:55.473 11:56:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:58.024 11:56:26 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:58.024 11:56:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:58.024 11:56:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:47:58.024 11:56:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:58.024 11:56:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:58.024 11:56:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:58.024 11:56:26 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:58.024 11:56:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:58.024 11:56:26 -- paths/export.sh@5 -- $ export PATH 00:47:58.024 11:56:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:58.024 11:56:26 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:47:58.024 11:56:26 -- common/autobuild_common.sh@437 -- $ date +%s 00:47:58.024 11:56:26 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718013386.XXXXXX 00:47:58.024 11:56:26 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013386.YLA3ke 00:47:58.024 11:56:26 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:47:58.024 11:56:26 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:47:58.024 11:56:26 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:47:58.024 11:56:26 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:47:58.024 11:56:26 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:47:58.024 11:56:26 -- common/autobuild_common.sh@453 -- $ get_config_params 00:47:58.024 11:56:26 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:47:58.024 11:56:26 -- common/autotest_common.sh@10 -- $ set +x 00:47:58.024 11:56:26 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:47:58.024 11:56:26 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:47:58.024 11:56:26 -- pm/common@17 -- $ local monitor 00:47:58.024 11:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.024 11:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.024 11:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.024 11:56:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.024 11:56:26 -- pm/common@21 -- $ date +%s 00:47:58.024 11:56:26 -- pm/common@21 -- $ date +%s 00:47:58.024 
11:56:26 -- pm/common@25 -- $ sleep 1 00:47:58.024 11:56:26 -- pm/common@21 -- $ date +%s 00:47:58.024 11:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013386 00:47:58.024 11:56:26 -- pm/common@21 -- $ date +%s 00:47:58.024 11:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013386 00:47:58.024 11:56:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013386 00:47:58.024 11:56:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013386 00:47:58.024 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013386_collect-vmstat.pm.log 00:47:58.024 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013386_collect-cpu-temp.pm.log 00:47:58.024 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013386_collect-cpu-load.pm.log 00:47:58.024 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013386_collect-bmc-pm.bmc.pm.log 00:47:58.967 11:56:27 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:47:58.967 11:56:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:47:58.967 11:56:27 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:58.967 11:56:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:47:58.967 11:56:27 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:47:58.967 11:56:27 -- spdk/autopackage.sh@19 -- $ timing_finish 00:47:58.967 11:56:27 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:58.967 11:56:27 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:47:58.967 11:56:27 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:58.967 11:56:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:47:58.967 11:56:27 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:47:58.967 11:56:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:47:58.967 11:56:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:47:58.967 11:56:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.968 11:56:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:47:58.968 11:56:27 -- pm/common@44 -- $ pid=2580911 00:47:58.968 11:56:27 -- pm/common@50 -- $ kill -TERM 2580911 00:47:58.968 11:56:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.968 11:56:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:47:58.968 11:56:27 -- pm/common@44 -- $ pid=2580912 00:47:58.968 11:56:27 -- pm/common@50 -- $ kill 
-TERM 2580912 00:47:58.968 11:56:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.968 11:56:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:47:58.968 11:56:27 -- pm/common@44 -- $ pid=2580914 00:47:58.968 11:56:27 -- pm/common@50 -- $ kill -TERM 2580914 00:47:58.968 11:56:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:47:58.968 11:56:27 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:47:58.968 11:56:27 -- pm/common@44 -- $ pid=2580941 00:47:58.968 11:56:27 -- pm/common@50 -- $ sudo -E kill -TERM 2580941 00:47:58.968 + [[ -n 1991964 ]] 00:47:58.968 + sudo kill 1991964 00:47:59.239 [Pipeline] } 00:47:59.259 [Pipeline] // stage 00:47:59.265 [Pipeline] } 00:47:59.284 [Pipeline] // timeout 00:47:59.290 [Pipeline] } 00:47:59.309 [Pipeline] // catchError 00:47:59.315 [Pipeline] } 00:47:59.333 [Pipeline] // wrap 00:47:59.340 [Pipeline] } 00:47:59.356 [Pipeline] // catchError 00:47:59.366 [Pipeline] stage 00:47:59.368 [Pipeline] { (Epilogue) 00:47:59.384 [Pipeline] catchError 00:47:59.386 [Pipeline] { 00:47:59.398 [Pipeline] echo 00:47:59.400 Cleanup processes 00:47:59.406 [Pipeline] sh 00:47:59.694 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:59.694 2581040 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:47:59.694 2581463 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:59.708 [Pipeline] sh 00:47:59.994 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:59.994 ++ grep -v 'sudo pgrep' 00:47:59.994 ++ awk '{print $1}' 00:47:59.994 + sudo kill -9 2581040 00:48:00.007 [Pipeline] sh 00:48:00.295 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:15.213 [Pipeline] sh 00:48:15.500 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:15.500 Artifacts sizes are good 00:48:15.515 [Pipeline] archiveArtifacts 00:48:15.523 Archiving artifacts 00:48:15.714 [Pipeline] sh 00:48:15.997 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:48:16.018 [Pipeline] cleanWs 00:48:16.028 [WS-CLEANUP] Deleting project workspace... 00:48:16.028 [WS-CLEANUP] Deferred wipeout is used... 00:48:16.036 [WS-CLEANUP] done 00:48:16.038 [Pipeline] } 00:48:16.062 [Pipeline] // catchError 00:48:16.075 [Pipeline] sh 00:48:16.361 + logger -p user.info -t JENKINS-CI 00:48:16.372 [Pipeline] } 00:48:16.389 [Pipeline] // stage 00:48:16.394 [Pipeline] } 00:48:16.412 [Pipeline] // node 00:48:16.417 [Pipeline] End of Pipeline 00:48:16.453 Finished: SUCCESS
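Both the prologue (seen earlier in the build) and the epilogue above clear out stray processes with the same pgrep / grep -v / awk / kill -9 pattern before handing the node back. A minimal sketch of that pattern follows; the helper name cleanup_stray_procs is illustrative and not part of the scripts in the log.

```bash
#!/usr/bin/env bash
# Minimal sketch of the stray-process cleanup used in the pipeline
# prologue and epilogue. The function name is illustrative only.
cleanup_stray_procs() {
    local workspace=$1

    # List every process whose command line references the workspace,
    # excluding the pgrep invocation itself, keeping only the PIDs.
    local pids
    pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')

    # Force-kill whatever is left; tolerate an empty list, which is what
    # the trailing "+ true" in the logged transcript accomplishes.
    [ -n "$pids" ] && sudo kill -9 $pids || true
}

cleanup_stray_procs /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
```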